251 results on '"Wesley Nunes Gonçalves"'
Search Results
2. Assessment of UAV-Based Deep Learning for Corn Crop Analysis in Midwest Brazil
- Author
-
José Augusto Correa Martins, Alberto Yoshiriki Hisano Higuti, Aiesca Oliveira Pellegrin, Raquel Soares Juliano, Adriana Mello de Araújo, Luiz Alberto Pellegrin, Veraldo Liesenberg, Ana Paula Marques Ramos, Wesley Nunes Gonçalves, Diego André Sant’Ana, Hemerson Pistori, and José Marcato Junior
- Subjects
crop segmentation, drones, precision agriculture, semantic segmentation, Agriculture (General), S1-972 - Abstract
Crop segmentation, the process of identifying and delineating agricultural fields or specific crops within an image, plays a crucial role in precision agriculture, enabling farmers and public managers to make informed decisions regarding crop health, yield estimation, and resource allocation in Midwest Brazil. Corn crops in this region are being damaged by wild pigs and by disease. To quantify corn fields, this paper applies novel computer-vision techniques to a new corn imagery dataset composed of 1416 images of 256 × 256 pixels with corresponding labels. We flew nine drone missions and classified wild-pig damage in ten orthomosaics at different growth stages using semi-automatic digitizing and deep-learning techniques. The crop-development period analyzed ranges from early sprouting to the start of the drying phase. The objective of segmentation is to transform or simplify the representation of an image, making it more meaningful and easier to interpret. Using the DeepLabV3+ architecture, the corn class achieved an IoU of 77.92% and the background 83.25%; with SegFormer, corn reached 78.81% and background 83.73%. Accuracy was 86.88% for corn and 91.41% for background using DeepLabV3+, and 88.14% and 91.15%, respectively, using SegFormer.
- Published
- 2024
- Full Text
- View/download PDF
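The per-class IoU figures quoted in the abstract above follow directly from pixel counts: the intersection of predicted and ground-truth pixels of a class, divided by their union. A minimal illustrative sketch (toy 1-D label lists standing in for the real 256 × 256 masks; not the authors' code):

```python
def class_iou(pred, truth, cls):
    """Intersection-over-Union for one class over flat label lists."""
    inter = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, truth) if p == cls or t == cls)
    return inter / union if union else 0.0

# toy "masks": 1 = corn, 0 = background
pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
print(class_iou(pred, truth, 1))  # corn IoU -> 0.5
print(class_iou(pred, truth, 0))  # background IoU -> 0.5
```

In practice the counts are accumulated over every pixel of every test image before dividing, rather than per image.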
3. A deep learning approach based on graphs to detect plantation lines
- Author
-
Diogo Nunes Gonçalves, José Marcato Junior, Mauro dos Santos de Arruda, Vanessa Jordão Marcato Fernandes, Ana Paula Marques Ramos, Danielle Elis Garcia Furuya, Lucas Prado Osco, Hongjie He, Lucio André de Castro Jorge, Jonathan Li, Farid Melgani, Hemerson Pistori, and Wesley Nunes Gonçalves
- Subjects
Remote sensing, Convolutional neural networks, Aerial imagery, UAV, Object detection, Science (General), Q1-390, Social sciences (General), H1-99 - Abstract
Identifying plantation lines in aerial images of agricultural landscapes is required for many automatic farming processes. Deep learning-based networks are among the most prominent methods to learn such patterns and extract this type of information from diverse imagery conditions. However, even state-of-the-art methods may stumble in complex plantation patterns. Here, we propose a deep learning approach based on graphs to detect plantation lines in UAV-based RGB imagery, presenting a challenging scenario containing spaced plants. The first module of our method extracts a feature map through the backbone, which consists of the initial layers of VGG16. This feature map is used as input to the Knowledge Estimation Module (KEM), organized in three concatenated branches for detecting 1) the plant positions, 2) the plantation lines, and 3) the displacement vectors between plants. Graph modeling is then applied, treating each plant position in the image as a vertex and forming edges between pairs of vertices (i.e., plants). Finally, each edge is classified as belonging to a given plantation line when three probabilities each exceed 0.5: i) one based on visual features obtained from the backbone; ii) the probability that the edge pixels belong to a line, from the KEM step; and iii) the alignment of the displacement vectors with the edge, also from the KEM step. Experiments were initially conducted on aerial RGB imagery of corn plantations with different growth stages and patterns to demonstrate the advantages of each module. We then assessed the generalization capability on datasets of two other crops (orange and eucalyptus). The proposed method was compared against state-of-the-art deep learning methods and achieved superior performance by a significant margin on all three datasets. This approach is useful for extracting lines with spaced plantation patterns and could be applied in scenarios where plantation gaps occur, generating lines with few-to-no interruptions.
- Published
- 2024
- Full Text
- View/download PDF
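The edge-filtering rule described in the abstract (keep an edge only when all three probability sources exceed 0.5) can be sketched as follows. The function name, the dict layout, and the toy probabilities are illustrative assumptions, not the authors' code:

```python
def classify_edges(edges, p_visual, p_line, p_align, thr=0.5):
    """Keep an edge only when all three probability sources exceed thr.

    edges: list of (i, j) vertex pairs (plant indices).
    p_*:   dicts mapping an edge to its probability from each branch
           (visual features, line membership, displacement alignment).
    """
    return [e for e in edges
            if p_visual[e] > thr and p_line[e] > thr and p_align[e] > thr]

edges = [(0, 1), (1, 2), (0, 2)]
p_visual = {(0, 1): 0.9, (1, 2): 0.8, (0, 2): 0.3}
p_line   = {(0, 1): 0.7, (1, 2): 0.9, (0, 2): 0.9}
p_align  = {(0, 1): 0.8, (1, 2): 0.6, (0, 2): 0.9}
print(classify_edges(edges, p_visual, p_line, p_align))  # [(0, 1), (1, 2)]
```

Edge (0, 2) is rejected because its visual-feature probability (0.3) falls below the threshold, even though the other two branches agree.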
4. The Segment Anything Model (SAM) for remote sensing applications: From zero to one shot
- Author
-
Lucas Prado Osco, Qiusheng Wu, Eduardo Lopes de Lemos, Wesley Nunes Gonçalves, Ana Paula Marques Ramos, Jonathan Li, and José Marcato Junior
- Subjects
Artificial intelligence, Image segmentation, Multi-scale datasets, Text-prompt technique, Physical geography, GB3-5030, Environmental sciences, GE1-350 - Abstract
Segmentation is an essential step in remote sensing image processing. This study aims to advance the application of the Segment Anything Model (SAM), an innovative image segmentation model by Meta AI, in the field of remote sensing image analysis. SAM is known for its exceptional generalization capabilities and zero-shot learning, making it a promising approach for processing aerial and orbital images from diverse geographical contexts. Our exploration involved testing SAM across multi-scale datasets using various input prompts, such as bounding boxes, individual points, and text descriptors. To enhance the model's performance, we implemented a novel automated technique that combines a text-prompt-derived general example with one-shot training. This adjustment resulted in an improvement in accuracy, underscoring SAM's potential for deployment in remote sensing imagery and for reducing the need for manual annotation. Despite the limitations encountered with lower-spatial-resolution images, SAM exhibits promising adaptability to remote sensing data analysis. We recommend that future research enhance the model's proficiency through integration with supplementary fine-tuning techniques and other networks. Furthermore, we provide the open-source code of our modifications in online repositories, encouraging further and broader adaptations of SAM to the remote sensing domain.
- Published
- 2023
- Full Text
- View/download PDF
5. Pseudo-label semi-supervised learning for soybean monitoring
- Author
-
Gabriel Kirsten Menezes, Gilberto Astolfi, José Augusto Correa Martins, Everton Castelão Tetila, Adair da Silva Oliveira Junior, Diogo Nunes Gonçalves, José Marcato Junior, Jonathan Andrade Silva, Jonathan Li, Wesley Nunes Gonçalves, and Hemerson Pistori
- Subjects
Deep learning, Superpixel, Semi-supervised learning, Soybean, Weeds, Unmanned aerial vehicle, Agriculture (General), S1-972, Agricultural industries, HD9000-9495 - Abstract
This paper presents a semi-supervised learning method based on superpixels and convolutional neural networks (CNNs) to improve the identification of weeds in soybean crops. Despite their promising results, CNNs require massive amounts of labeled training data, so we aim to improve the manual labeling phase with an automated pseudo-labeling process. We propose a method that uses an additional mini-batch processing phase during training to fine-tune the model and assign pseudo labels to the images based on previously annotated SLIC segmentation. This paper shows that the proposed method improves soybean monitoring accuracy compared with traditionally trained methods while using a tiny amount of labeled superpixels. Training time increased, but this is an expected result and still preferable to manual label annotation.
- Published
- 2023
- Full Text
- View/download PDF
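The pseudo-labeling step described above — let the partially trained model label the unlabeled superpixels it is confident about, and keep the rest for later rounds — can be sketched as below. The function names and the 0.9 confidence threshold are assumptions for illustration, not details from the paper:

```python
def pseudo_label(superpixels, predict, threshold=0.9):
    """Split unlabeled superpixels into confidently pseudo-labeled
    (superpixel, label) pairs and ones kept back for the next round.

    predict(sp) -> (label, confidence) is the current model.
    """
    labeled, kept_back = [], []
    for sp in superpixels:
        label, conf = predict(sp)
        if conf >= threshold:
            labeled.append((sp, label))
        else:
            kept_back.append(sp)
    return labeled, kept_back

# toy "model": even superpixel ids are weeds with high confidence
fake_model = lambda sp: ("weed", 0.95) if sp % 2 == 0 else ("soy", 0.6)
labeled, kept = pseudo_label([0, 1, 2, 3], fake_model)
print(labeled)  # [(0, 'weed'), (2, 'weed')]
print(kept)     # [1, 3]
```

The pseudo-labeled pairs would then be mixed into the next training mini-batches alongside the manually annotated superpixels.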
6. Counting tilapia larvae using images captured by smartphones
- Author
-
Celso Soares Costa, Wesley Nunes Gonçalves, Vanda Alice Garcia Zanoni, Mauro dos Santos de Arruda, Mário de Araújo Carvalho, Edgar Nascimento, José Marcato Junior, Odair Diemer, and Hemerson Pistori
- Subjects
Tilapia aquaculture, Precision livestock technology, Computer vision, Larvae counting, Agriculture (General), S1-972, Agricultural industries, HD9000-9495 - Abstract
In this work, we propose a new approach to automatically counting fish larvae in Petri dishes using images captured by a standard smartphone. A new tilapia larvae image dataset for training and validating machine learning models was created and used to validate a recent machine learning approach based on multi-stage model refinement of confidence maps. A mean absolute error (MAE) of 1.43 was achieved with the proposed automatic larvae counter, indicating that the approach is promising for larvae counting, as the mean number of larvae per image exceeds 20. The proposed approach also achieved precision, recall, and F-measure values of 0.98, 0.92, and 0.95, respectively, for larvae detection using a dataset containing more than 6,000 manually annotated larvae.
- Published
- 2023
- Full Text
- View/download PDF
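The two kinds of scores reported in the abstract above are computed differently: MAE compares per-image counts, while precision/recall/F-measure come from matching individual detections to annotations (true positives, false positives, false negatives). A minimal sketch of both, with toy numbers chosen only to mirror the scale of the reported values:

```python
def count_mae(pred_counts, true_counts):
    """Mean absolute error between per-image predicted and true counts."""
    return sum(abs(p - t) for p, t in zip(pred_counts, true_counts)) / len(true_counts)

def detection_scores(tp, fp, fn):
    """Precision, recall, and F-measure from matched detections."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(count_mae([21, 19], [22, 20]))       # 1.0
print(detection_scores(tp=92, fp=2, fn=8))
```

How a predicted point is matched to an annotated larva (e.g., by a maximum distance) is a separate step not shown here.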
7. 3D building model generation from MLS point cloud and 3D mesh using multi-source data fusion
- Author
-
Weiquan Liu, Yu Zang, Zhangyue Xiong, Xuesheng Bian, Chenglu Wen, Xiaolei Lu, Cheng Wang, José Marcato Junior, Wesley Nunes Gonçalves, and Jonathan Li
- Subjects
3D building model generation, MLS point cloud, 3D mesh, Multi-source data fusion, Physical geography, GB3-5030, Environmental sciences, GE1-350 - Abstract
The high-precision generation of 3D building models is a challenging research topic in the field of smart cities. However, due to the limitations of single-source data, existing methods cannot simultaneously balance the local accuracy, overall integrity, and data scale of the building model. In this paper, we propose a novel 3D building model generation method based on deep learning and multi-source fusion of 3D point cloud and 3D mesh data. First, a Multi-Source 3D data Quality Evaluation Network (MS3DQE-Net) is proposed to evaluate the quality of 3D meshes and 3D point clouds; the evaluation results then guide 3D building model generation. MS3DQE-Net takes 3D meshes and 3D point clouds as inputs and fuses the learned features to obtain a more complete shape description. To train MS3DQE-Net, a multi-source 3D dataset was constructed, collected from a real scene based on mobile laser scanning (MLS) 3D point clouds and 3D mesh data, and including pairs of matching 3D meshes and 3D point clouds of the building model. To our knowledge, we are the first to propose such a multi-source 3D dataset. The experimental results show that MS3DQE-Net achieves state-of-the-art performance in multi-source 3D data quality evaluation. We demonstrate large-scale, high-precision 3D building model generation on a campus.
- Published
- 2023
- Full Text
- View/download PDF
8. Transformers for mapping burned areas in Brazilian Pantanal and Amazon with PlanetScope imagery
- Author
-
Diogo Nunes Gonçalves, José Marcato Junior, André Caceres Carrilho, Plabiany Rodrigo Acosta, Ana Paula Marques Ramos, Felipe David Georges Gomes, Lucas Prado Osco, Maxwell da Rosa Oliveira, José Augusto Correa Martins, Geraldo Alves Damasceno Júnior, Márcio Santos de Araújo, Jonathan Li, Fábio Roque, Leonardo de Faria Peres, Wesley Nunes Gonçalves, and Renata Libonati
- Subjects
Multispectral imagery, Deep learning, Transfer learning, Wildfire, Physical geography, GB3-5030, Environmental sciences, GE1-350 - Abstract
The Pantanal is the largest continuous wetland in the world, but its biodiversity is currently endangered by catastrophic wildfires that occurred in the last three years. The information available for the area covers only the location and extent of burned areas based on medium- and low-spatial-resolution imagery, ranging from 30 m up to 1 km. However, to improve measurements and assist environmental actions, robust methods are required to provide detailed mapping of burned areas at a higher spatial scale, such as PlanetScope imagery with 3–5 m spatial resolution. State-of-the-art Deep Learning (DL) segmentation methods, specifically Transformer-based networks, are among the best emerging approaches for extracting information from remote sensing imagery. Here we combine Transformer-based DL methods and high-resolution PlanetScope imagery to map burned areas in the Brazilian Pantanal wetland. We first compared the performance of multiple DL networks, namely the Transformer-based SegFormer and DTP methods, with CNN-based networks such as PSPNet, FCN, DeepLabV3+, OCRNet, and ISANet, applied to Planet imagery considering RGB and near-infrared bands within a large dataset of 1282 image patches (512 × 512 pixels). We later verified the generalization capability of the model for segmenting burned areas in different regions of the Brazilian Amazon, also known worldwide for its environmental relevance. As a result, the two Transformer-based methods, SegFormer (F1-score of 95.91%) and DTP (F1-score of 95.15%), provided the most accurate results in mapping burned forest areas in the Pantanal. Results show that the combination of SegFormer and RGB+NIR imagery with pre-trained weights is the best option (F1-score of 96.52%) to distinguish burned from unburned areas. When applying the generated model to two Brazilian Amazon forest regions, we achieved an average F1-score of 95.88% for burned areas. We conclude that Transformer-based networks are fit to deal with burned areas in two of the most environmentally relevant areas of the world using high-spatial-resolution imagery.
- Published
- 2023
- Full Text
- View/download PDF
9. Mauritia flexuosa palm trees airborne mapping with deep convolutional neural network
- Author
-
Luciene Sales Dagher Arce, Lucas Prado Osco, Mauro dos Santos de Arruda, Danielle Elis Garcia Furuya, Ana Paula Marques Ramos, Camila Aoki, Arnildo Pott, Sarah Fatholahi, Jonathan Li, Fábio Fernando de Araújo, Wesley Nunes Gonçalves, and José Marcato Junior
- Subjects
Medicine, Science - Abstract
Accurately mapping individual tree species in densely forested environments is crucial to forest inventory. When considering only RGB images, this is a challenging task for many automatic photogrammetry processes, mainly because of the spectral similarity between species in RGB scenes, which hinders most automatic methods. This paper presents a deep learning-based approach to detect an important multi-use palm tree species (Mauritia flexuosa, i.e., Buriti) in aerial RGB imagery. In South America, this palm tree is essential for many indigenous and local communities because of its characteristics. The species is also a valuable indicator of water resources, an added benefit of mapping its location. The method is based on a Convolutional Neural Network (CNN) to identify and geolocate individual trees of the species in a high-complexity forest environment. The results returned a mean absolute error (MAE) of 0.75 trees and an F1-measure of 86.9%, better than the Faster R-CNN and RetinaNet methods under equal experimental conditions. In conclusion, the method presented efficiently deals with a high-density forest scenario, can accurately map the location of single species such as the M. flexuosa palm tree, and may be useful for future frameworks.
- Published
- 2021
- Full Text
- View/download PDF
10. A deep learning-based mobile application for tree species mapping in RGB images
- Author
-
Mário de Araújo Carvalho, José Marcato Junior, José Augusto Correa Martins, Pedro Zamboni, Celso Soares Costa, Henrique Lopes Siqueira, Márcio Santos Araújo, Diogo Nunes Gonçalves, Danielle Elis Garcia Furuya, Lucas Prado Osco, Ana Paula Marques Ramos, Jonathan Li, Amaury Antônio de Castro Junior, and Wesley Nunes Gonçalves
- Subjects
Convolutional neural networks, Mobile devices, Trees mapping, Remote sensing, Physical geography, GB3-5030, Environmental sciences, GE1-350 - Abstract
Tree species mapping is an important type of information demanded in different study fields. However, this task can be expensive and time-consuming, making it difficult to monitor extensive areas; hence, automatic methods are required to optimize tree species mapping. Here, we propose a deep learning-based mobile application for tree species classification in high-spatial-resolution RGB images. Several deep learning architectures were evaluated, including mobile networks and traditional models. A total of 2,349 images were used, of which 1,174 were of the Dipteryx alata species and 1,175 of other local species. These images were manually annotated and randomly divided into training (70%), validation (20%), and testing (10%) subsets, considering five-fold cross-validation. We evaluated the accuracy and speed (GPU and CPU) of all implemented deep learning architectures. We found that the traditional networks have the best performance in terms of F1-score, while the mobile networks are faster. The Inception V3 model achieved the best accuracy (F1-score of 97.4%) and MobileNet the worst (F1-score of 83.84%). MobileNet obtained the best classification speed on CPU (mean execution time of 102.8 ms) and GPU (72.4 ms); for comparison, Inception V3 achieved mean execution times of 1058.3 ms on CPU and 634.5 ms on GPU. We conclude that the proposed mobile application can successfully run both mobile and traditional networks for image classification, but the balance between accuracy and execution time needs to be carefully assessed. This mobile app is a tool for researchers, policymakers, non-governmental organizations, and members of the general public who intend to assess tree species, providing a GUI-based platform for non-programmers to access the capabilities of deep learning models in complex classification tasks.
- Published
- 2022
- Full Text
- View/download PDF
11. Deep Learning Approaches to Spatial Downscaling of GRACE Terrestrial Water Storage Products Using EALCO Model Over Canada
- Author
-
Hongjie He, Ke Yang, Shusen Wang, Hasti Andon Petrosians, Ming Liu, Junhua Li, José Marcato Junior, Wesley Nunes Gonçalves, Lanying Wang, and Jonathan Li
- Subjects
Environmental sciences, GE1-350, Technology - Abstract
Estimating terrestrial water storage (TWS) with high spatial resolution is crucial for hydrological and water resource management. Compared to traditional in-situ measurements, observation from spaceborne sensors such as the Gravity Recovery and Climate Experiment (GRACE) satellites is effective for obtaining large-scale TWS data. However, the coarse resolution of GRACE data restricts its application at the local scale. This paper presents three convolutional neural network (CNN) based approaches, the Super-Resolution CNN (SRCNN), the Very Deep Super-Resolution (VDSR) network, and the Residual Channel Attention Network (RCAN), for spatial downscaling of the monthly GRACE TWS products using the outputs of the Ecological Assimilation of Land and Climate Observations (EALCO) model over Canada. We also compare the performance of the CNN-based methods with an empirical linear regression-based downscaling method. All comparisons were evaluated by the root mean square error (RMSE) between the reconstructed GRACE TWS and the original. RMSEs over the matched pixels are 22.3, 14.4, 18.4, and 71.6 mm for SRCNN, VDSR, RCAN, and the linear regression-based method, respectively, so VDSR shows the best accuracy among all methods. The results show that all CNN-based super-resolution methods perform much better than the traditional method in spatial downscaling.
- Published
- 2021
- Full Text
- View/download PDF
12. Road extraction in remote sensing data: A survey
- Author
-
Ziyi Chen, Liai Deng, Yuhua Luo, Dilong Li, José Marcato Junior, Wesley Nunes Gonçalves, Abdul Awal Md Nurunnabi, Jonathan Li, Cheng Wang, and Deren Li
- Subjects
Road extraction, Review, 2D and 3D, Remote sensing, Point clouds, Physical geography, GB3-5030, Environmental sciences, GE1-350 - Abstract
Automated extraction of roads from remotely sensed data serves various uses, ranging from digital twins for smart cities, intelligent transportation, urban planning, and autonomous driving to emergency management. Many studies have focused on advancing methods for automated road extraction from aerial and satellite optical images, synthetic aperture radar (SAR) images, and LiDAR point clouds, yet no comprehensive survey on this topic has appeared in the literature in the past ten years. This paper attempts to provide a comprehensive survey of road extraction methods that use 2D earth observation images and 3D LiDAR point clouds. In this review, we first present a tree structure that separates the literature into 2D and 3D approaches, then further classify the methodologies within each, introducing and analyzing the literature published in the last ten years. Beyond methodologies, we also review the data commonly used. Finally, this paper explores existing challenges and future trends.
- Published
- 2022
- Full Text
- View/download PDF
13. The Potential of Visual ChatGPT for Remote Sensing
- Author
-
Lucas Prado Osco, Eduardo Lopes de Lemos, Wesley Nunes Gonçalves, Ana Paula Marques Ramos, and José Marcato Junior
- Subjects
artificial intelligence, image analysis, visual language model, Science - Abstract
Recent advancements in Natural Language Processing (NLP), particularly in Large Language Models (LLMs), combined with deep learning-based computer vision techniques, have shown substantial potential for automating a variety of tasks. These are known as Visual LLMs, and one notable model is Visual ChatGPT, which combines ChatGPT's LLM capabilities with visual computation to enable effective image analysis. These models' ability to process images based on textual inputs can revolutionize diverse fields, and while their application in the remote sensing domain remains unexplored, novel implementations are to be expected. This is the first paper to examine the potential of Visual ChatGPT, a cutting-edge LLM founded on the GPT architecture, to tackle image processing tasks related to the remote sensing domain. Among its current capabilities, Visual ChatGPT can generate textual descriptions of images, perform Canny edge and straight-line detection, and conduct image segmentation. These offer valuable insights into image content and facilitate the interpretation and extraction of information. By exploring the applicability of these techniques within publicly available satellite image datasets, we demonstrate the current model's limitations in dealing with remote sensing images, highlighting its challenges and future prospects. Although still in early development, we believe that the combination of LLMs and visual models holds significant potential to transform remote sensing image processing, creating accessible and practical application opportunities in the field.
- Published
- 2023
- Full Text
- View/download PDF
14. A review on deep learning in UAV remote sensing
- Author
-
Lucas Prado Osco, José Marcato Junior, Ana Paula Marques Ramos, Lúcio André de Castro Jorge, Sarah Narges Fatholahi, Jonathan de Andrade Silva, Edson Takashi Matsubara, Hemerson Pistori, Wesley Nunes Gonçalves, and Jonathan Li
- Subjects
Convolutional neural networks, Remote sensing imagery, Unmanned aerial vehicles, Physical geography, GB3-5030, Environmental sciences, GE1-350 - Abstract
Deep Neural Networks (DNNs) learn representations from data with impressive capability and have brought important breakthroughs in processing images, time series, natural language, audio, video, and many other data types. In the remote sensing field, surveys and literature reviews specifically covering DNN applications have been conducted to summarize the information produced in its subfields. Recently, Unmanned Aerial Vehicle (UAV)-based applications have dominated aerial sensing research; however, a literature review combining both the "deep learning" and "UAV remote sensing" themes had not yet been conducted. The motivation for our work was to present a comprehensive review of the fundamentals of Deep Learning (DL) applied to UAV-based imagery. We focused mainly on describing the classification and regression techniques used in recent applications with UAV-acquired data. A total of 232 papers published in international scientific journal databases were examined. We gathered the published materials and evaluated their characteristics regarding the application, sensor, and technique used. We discuss how DL presents promising results and its potential for processing tasks associated with UAV-based image data. Lastly, we project future perspectives, commenting on prominent DL paths to be explored in the UAV remote sensing field. This review introduces, comments on, and summarizes the state of the art in UAV-based image applications with DNN algorithms in diverse subfields of remote sensing, grouped into environmental, urban, and agricultural contexts.
- Published
- 2021
- Full Text
- View/download PDF
15. Active Fire Mapping on Brazilian Pantanal Based on Deep Learning and CBERS 04A Imagery
- Author
-
Leandro Higa, José Marcato Junior, Thiago Rodrigues, Pedro Zamboni, Rodrigo Silva, Laisa Almeida, Veraldo Liesenberg, Fábio Roque, Renata Libonati, Wesley Nunes Gonçalves, and Jonathan Silva
- Subjects
remote sensing, wildfire, object detection, convolutional neural network, Science - Abstract
Fire in the Brazilian Pantanal represents a serious threat to biodiversity. The Brazilian National Institute for Space Research (INPE) runs a program named Queimadas, which estimated a burned area in the Pantanal of approximately 40,606 km² from January 2020 to October 2020. This program also provides daily active fire (fire spot) data from a methodology that uses MODIS (Aqua and Terra) sensor data as reference satellites, which presents limitations mainly when dealing with small active fires. Remote sensing research on active fire dynamics has contributed to wildfire comprehension, despite generally applying low-spatial-resolution data. Convolutional Neural Networks (CNNs) associated with high- and medium-resolution remote sensing data may provide a complementary strategy for small active fire detection. We propose an approach based on object detection methods to map active fire in the Pantanal. In this approach, a post-processing strategy based on Non-Max Suppression (NMS) is adopted to reduce the number of highly overlapped detections. Extensive experiments were conducted, generating 150 models, as five folds were considered. We generate a public dataset with 775 RGB image patches from the Wide Field Imager (WFI) sensor onboard the China-Brazil Earth Resources Satellite (CBERS) 4A. The patches resulted from 49 images acquired from May to August 2020 and present spatial and temporal resolutions of 55 m and five days, respectively. The proposed approach uses a point (active fire) to generate squared bounding boxes. Our findings indicate that accurate results were achieved, even on recent images from 2021, showing the generalization capability of our models to complement other research and wildfire databases, such as the current Queimadas program, in detecting active fire in this complex environment. The approach may be extended and evaluated in other environmental conditions worldwide where active fire detection is still required information in firefighting and rescue initiatives.
- Published
- 2022
- Full Text
- View/download PDF
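The NMS post-processing mentioned in the abstract is a standard greedy procedure: keep the highest-scoring box, discard any remaining box that overlaps it beyond a threshold, and repeat. A self-contained sketch of that standard algorithm (not the authors' implementation):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def nms(boxes, scores, thr=0.5):
    """Greedy Non-Max Suppression: keep the highest-scoring box,
    drop any remaining box overlapping it by more than thr."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thr]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
print(nms(boxes, [0.9, 0.8, 0.7]))  # [0, 2]
```

Here the second box overlaps the first by about 0.68 IoU and is suppressed, while the distant third box survives.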
16. Predicting Days to Maturity, Plant Height, and Grain Yield in Soybean: A Machine and Deep Learning Approach Using Multispectral Data
- Author
-
Paulo Eduardo Teodoro, Larissa Pereira Ribeiro Teodoro, Fábio Henrique Rojo Baio, Carlos Antonio da Silva Junior, Regimar Garcia dos Santos, Ana Paula Marques Ramos, Mayara Maezano Faita Pinheiro, Lucas Prado Osco, Wesley Nunes Gonçalves, Alexsandro Monteiro Carneiro, José Marcato Junior, Hemerson Pistori, and Luciano Shozo Shiratsuchi
- Subjects
precision agriculture, multispectral remote sensing data, shallow learner, deep neural network, Science - Abstract
In soybean, there is a lack of research comparing the performance of machine learning (ML) and deep learning (DL) methods in predicting more than one agronomic variable, such as days to maturity (DM), plant height (PH), and grain yield (GY). As these variables are important for developing an overall precision farming model, we propose a machine learning approach to predict DM, PH, and GY for soybean cultivars based on multispectral bands. The field experiment considered 524 soybean genotypes in the 2017/2018 and 2018/2019 growing seasons and a multitemporal, multispectral dataset collected by a sensor embedded in an unmanned aerial vehicle (UAV). We propose a multilayer deep learning regression network, trained for 2000 epochs using an adaptive subgradient method, random Gaussian initialization, and a 50% dropout in the first hidden layer for regularization. Three scenarios, spectral bands only, vegetation indices only, and spectral bands plus vegetation indices, were adopted to infer each variable (PH, DM, and GY). The DL model's performance was compared against shallow learning methods such as random forest (RF), support vector machine (SVM), and linear regression (LR). The results indicate that our approach has the potential to predict soybean-related variables using multispectral bands alone. Both the DL and RF models presented strong prediction capacity for PH (r surpassing 0.77), regardless of the adopted input variable group. The DL model (r = 0.66) was superior in predicting DM when the input variables were the spectral bands. For GY, all evaluated machine learning models presented similar performance (r ranging from 0.42 to 0.44) in each tested scenario. In conclusion, this study demonstrates an efficient computational solution capable of predicting multiple important soybean crop variables from remote sensing data. Future research could build on the information presented here and be implemented in subsequent processes related to soybean or other agronomic crops.
- Published
- 2021
- Full Text
- View/download PDF
17. Semantic Segmentation of Tree-Canopy in Urban Environment with Pixel-Wise Deep Learning
- Author
-
José Augusto Correa Martins, Keiller Nogueira, Lucas Prado Osco, Felipe David Georges Gomes, Danielle Elis Garcia Furuya, Wesley Nunes Gonçalves, Diego André Sant’Ana, Ana Paula Marques Ramos, Veraldo Liesenberg, Jefersson Alex dos Santos, Paulo Tarso Sanches de Oliveira, and José Marcato Junior
- Subjects
remote sensing, image segmentation, sustainability, convolutional neural network, urban environment, Science - Abstract
Urban forests are an important part of any city, given that they provide several environmental benefits, such as improving urban drainage, climate regulation, public health, and biodiversity, among others. However, tree detection in cities is challenging, given the irregular shape, size, occlusion, and complexity of urban areas. With the advance of environmental technologies, deep learning segmentation mapping methods can map urban forests accurately. We applied a region-based CNN object instance segmentation algorithm for the semantic segmentation of tree canopies in urban environments based on aerial RGB imagery. To the best of our knowledge, no study has investigated the performance of deep learning-based methods for segmentation tasks inside the Cerrado biome, specifically for urban tree segmentation. Five state-of-the-art architectures were evaluated, namely: Fully Convolutional Network, U-Net, SegNet, Dynamic Dilated Convolution Network, and DeepLabV3+. The experimental analysis showed the effectiveness of these methods, reporting a pixel accuracy of 96.35%, an average accuracy of 91.25%, an F1-score of 91.40%, a Kappa of 82.80%, and an IoU of 73.89%. We also determined the inference time needed per area; after training, the deep learning methods investigated proved suitable for this task, providing fast and effective solutions with inference times varying from 0.042 to 0.153 minutes per hectare. We conclude that the semantic segmentation of trees inside urban environments is highly achievable with deep neural networks. This information could be of high importance for decision-making and may contribute to the management of urban systems. It is also worth mentioning that the dataset used in this work is available on our website.
- Published
- 2021
- Full Text
- View/download PDF
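The segmentation metrics reported in the record above (pixel accuracy, F1-score, IoU) can all be derived from a per-class confusion matrix. A minimal illustrative sketch in Python follows; the two-class label maps are synthetic toy data, not from the study:

```python
import numpy as np

def segmentation_metrics(pred, target, num_classes):
    """Pixel accuracy plus per-class IoU and F1 from integer label maps."""
    pred, target = pred.ravel(), target.ravel()
    # Confusion matrix: rows = ground truth, cols = prediction
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (target, pred), 1)
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp          # predicted as class c but not class c
    fn = cm.sum(axis=1) - tp          # class c missed by the prediction
    pixel_acc = tp.sum() / cm.sum()
    iou = tp / (tp + fp + fn)         # intersection over union per class
    f1 = 2 * tp / (2 * tp + fp + fn)  # Dice / F1 per class
    return pixel_acc, iou, f1

# Toy 2x2 example (0 = background, 1 = tree canopy)
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
acc, iou, f1 = segmentation_metrics(pred, target, 2)
# acc = 0.75; IoU for the tree class = 2/3; F1 for the tree class = 0.8
```

In practice these counts would be accumulated over all test patches before computing the ratios, so small patches with few foreground pixels do not dominate the average.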
18. A Building Roof Identification CNN Based on Interior-Edge-Adjacency Features Using Hyperspectral Imagery
- Author
-
Chengming Ye, Hongfu Li, Chunming Li, Xin Liu, Yao Li, Jonathan Li, Wesley Nunes Gonçalves, and José Marcato Junior
- Subjects
hyperspectral image ,spectral and spatial feature ,Convolutional Neural Network (CNN) ,interior-edge-adjacency features ,building roof ,Science - Abstract
Hyperspectral remote sensing can obtain both spatial and spectral information of ground objects. Making good use of spectral and image features is an important prerequisite for hyperspectral remote sensing applications. Therefore, we improved the Convolutional Neural Network (CNN) model by extracting interior-edge-adjacency features of building roofs and proposed a new CNN model with a flexible structure: the Building Roof Identification CNN (BRI-CNN). Our experimental results demonstrated that the BRI-CNN can not only extract interior-edge-adjacency features of building roofs but also change the weights of these different features during the training process, according to the selected samples. Our approach was tested using the Indian Pines (IP) data set, and our comparative study indicates that the BRI-CNN model achieves at least 0.2% higher overall accuracy than the capsule network model and more than 2% higher than CNN models.
- Published
- 2021
- Full Text
- View/download PDF
19. Benchmarking Anchor-Based and Anchor-Free State-of-the-Art Deep Learning Methods for Individual Tree Detection in RGB High-Resolution Images
- Author
-
Pedro Zamboni, José Marcato Junior, Jonathan de Andrade Silva, Gabriela Takahashi Miyoshi, Edson Takashi Matsubara, Keiller Nogueira, and Wesley Nunes Gonçalves
- Subjects
object detection ,convolutional neural network ,remote sensing ,Science - Abstract
Urban forests contribute to maintaining livability and increase the resilience of cities in the face of population growth and climate change. Information about the geographical distribution of individual trees is essential for the proper management of these systems. RGB high-resolution aerial images have emerged as a cheap and efficient source of data, although detecting and mapping single trees in an urban environment is a challenging task. Thus, we propose the evaluation of novel methods for single tree crown detection, as most of these methods have not been investigated in remote sensing applications. A total of 21 methods were investigated, including anchor-based (one and two-stage) and anchor-free state-of-the-art deep-learning methods. We used two orthoimages divided into 220 non-overlapping patches of 512 × 512 pixels with a ground sample distance (GSD) of 10 cm. The orthoimages were manually annotated, and 3382 single tree crowns were identified as the ground-truth. Our findings show that the anchor-free detectors achieved the best average performance with an AP50 of 0.686. We observed that the two-stage anchor-based and anchor-free methods showed better performance for this task, emphasizing the FSAF, Double Heads, CARAFE, ATSS, and FoveaBox models. RetinaNet, which is currently commonly applied in remote sensing, did not show satisfactory performance, and Faster R-CNN had lower results than the best methods but with no statistically significant difference. Our findings contribute to a better understanding of the performance of novel deep-learning methods in remote sensing applications and could be used as an indicator of the most suitable methods in such applications.
- Published
- 2021
- Full Text
- View/download PDF
20. A Machine Learning Approach for Mapping Forest Vegetation in Riparian Zones in an Atlantic Biome Environment Using Sentinel-2 Imagery
- Author
-
Danielle Elis Garcia Furuya, João Alex Floriano Aguiar, Nayara V. Estrabis, Mayara Maezano Faita Pinheiro, Michelle Taís Garcia Furuya, Danillo Roberto Pereira, Wesley Nunes Gonçalves, Veraldo Liesenberg, Jonathan Li, José Marcato Junior, Lucas Prado Osco, and Ana Paula Marques Ramos
- Subjects
machine learning ,decision tree ,sentinel images ,image classification ,forest vegetation mapping ,Science - Abstract
Riparian zones are important environmental regions, specifically for maintaining the quality of water resources. Accurately mapping forest vegetation in riparian zones is an important issue, since it may provide information about the numerous surface processes that occur in these areas. Recently, machine learning algorithms have gained attention as an innovative approach to extract information from remote sensing imagery, including support for the task of mapping vegetation areas. Nonetheless, studies on machine learning applications for forest vegetation mapping exclusively in riparian zones are still limited. Therefore, this paper presents a framework for forest vegetation mapping in riparian zones based on machine learning models using orbital multispectral images. A total of 14 Sentinel-2 images acquired throughout the year, covering a large riparian zone along a portion of a wide river in the Pontal do Paranapanema region, São Paulo state, Brazil, were adopted as the dataset. This area is mainly composed of Atlantic Biome vegetation and is near the last primary fragment of this biome, making it an important region from the environmental planning point of view. We compared the performance of multiple machine learning algorithms: decision tree (DT), random forest (RF), support vector machine (SVM), and normal Bayes (NB). We evaluated different dates and locations with all models. Our results demonstrated that the DT learner has, overall, the highest accuracy in this task. The DT algorithm also showed high accuracy when applied on different dates and in the riparian zone of another river. We conclude that the proposed approach is appropriate to accurately map forest vegetation in riparian zones, including in a temporal context.
- Published
- 2020
- Full Text
- View/download PDF
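A model comparison like the one in the record above (DT vs. RF vs. SVM vs. NB) can be sketched with scikit-learn. The feature matrix and labels below are synthetic stand-ins for per-pixel spectral features and forest/non-forest classes, not the Sentinel-2 dataset from the study:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(42)
# Synthetic stand-in: 300 samples x 10 spectral features
X = rng.normal(size=(300, 10))
# Synthetic label: forest (1) vs. non-forest (0), driven by two bands
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "NB": GaussianNB(),
}
# Mean 5-fold cross-validated accuracy per learner
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
```

With real imagery, the same loop would be repeated per acquisition date and per study site, as the paper does, to check whether the ranking of learners is stable across conditions.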
21. ATSS Deep Learning-Based Approach to Detect Apple Fruits
- Author
-
Leonardo Josoé Biffi, Edson Mitishita, Veraldo Liesenberg, Anderson Aparecido dos Santos, Diogo Nunes Gonçalves, Nayara Vasconcelos Estrabis, Jonathan de Andrade Silva, Lucas Prado Osco, Ana Paula Marques Ramos, Jorge Antonio Silva Centeno, Marcos Benedito Schimalski, Leo Rufato, Sílvio Luís Rafaeli Neto, José Marcato Junior, and Wesley Nunes Gonçalves
- Subjects
convolutional neural network ,object detection ,precision agriculture ,Science - Abstract
In recent years, many agriculture-related problems have been evaluated with the integration of artificial intelligence techniques and remote sensing systems. Specifically, for fruit detection problems, several recent works were developed using Deep Learning (DL) methods applied to images acquired at different acquisition levels. However, the increasing use of anti-hail plastic net covers in commercial orchards highlights the importance of terrestrial remote sensing systems. Apples are one of the most challenging fruits to be detected in images, mainly because of the occurrence of target occlusion. Additionally, the introduction of high-density apple tree orchards makes the identification of single fruits a real challenge. To support farmers in detecting apple fruits efficiently, this paper presents an approach based on the Adaptive Training Sample Selection (ATSS) deep learning method applied to close-range and low-cost terrestrial RGB images. The correct identification supports apple production forecasting and gives local producers a better idea of forthcoming management practices. The main advantage of the ATSS method is that only the center point of the objects needs to be labeled, which is much more practicable and realistic than bounding-box annotations in heavily dense fruit orchards. Additionally, we evaluated other object detection methods such as RetinaNet, Libra Regions with Convolutional Neural Network (R-CNN), Cascade R-CNN, Faster R-CNN, Feature Selective Anchor-Free (FSAF), and High-Resolution Network (HRNet). The study area is a highly dense apple orchard of Fuji Suprema apple fruits (Malus domestica Borkh) located on a smallholder farm in the state of Santa Catarina (southern Brazil). A total of 398 terrestrial images were taken nearly perpendicularly in front of the trees with a professional camera, assuring both good vertical coverage of the apple trees in terms of height and overlap between picture frames. Afterwards, the high-resolution RGB images were divided into several patches to help the detection of small and/or occluded apples. A total of 3119, 840, and 2010 patches were used for training, validation, and testing, respectively. Moreover, the proposed method’s generalization capability was assessed by applying simulated image corruptions to the test set images with different severity levels, including noise, blurs, weather, and digital processing. Experiments were also conducted by varying the bounding box size (80, 100, 120, 140, 160, and 180 pixels) in the original image for the proposed approach. Our results showed that the ATSS-based method slightly outperformed all other deep learning methods, by margins between 0.3% and 2.4%. We also verified that the best result was obtained with a bounding box size of 160 × 160 pixels. The proposed method was robust against most of the corruptions, except for the snow, frost, and fog weather conditions. Finally, a benchmark of the reported dataset is also generated and publicly available.
- Published
- 2020
- Full Text
- View/download PDF
22. Leaf Nitrogen Concentration and Plant Height Prediction for Maize Using UAV-Based Multispectral Imagery and Machine Learning Techniques
- Author
-
Lucas Prado Osco, José Marcato Junior, Ana Paula Marques Ramos, Danielle Elis Garcia Furuya, Dthenifer Cordeiro Santana, Larissa Pereira Ribeiro Teodoro, Wesley Nunes Gonçalves, Fábio Henrique Rojo Baio, Hemerson Pistori, Carlos Antonio da Silva Junior, and Paulo Eduardo Teodoro
- Subjects
UAV ,random forest ,nitrogen ,maize ,Science - Abstract
Under ideal nitrogen (N) conditions, maize (Zea mays L.) can grow to its full potential, reaching maximum plant height (PH). As a rapid and nondestructive approach, the analysis of unmanned aerial vehicle (UAV)-based imagery may be of assistance in estimating N and height. The main objective of this study is to present an approach to predict leaf nitrogen concentration (LNC, g kg−1) and PH (m) with machine learning techniques and UAV-based multispectral imagery in maize plants. An experiment with 11 maize cultivars under two rates of N fertilization was carried out during the 2017/2018 and 2018/2019 crop seasons. The spectral vegetation indices (VIs) normalized difference vegetation index (NDVI), normalized difference red-edge index (NDRE), green normalized difference vegetation index (GNDVI), and soil-adjusted vegetation index (SAVI) were extracted from the images and, in a computational system, used alongside the spectral bands as input parameters for different machine learning models. A randomized 10-fold cross-validation strategy, with a total of 100 replicates, was used to evaluate the performance of 9 supervised machine learning (ML) models using Pearson’s correlation coefficient (r), mean absolute error (MAE), coefficient of determination (R²), and root mean square error (RMSE) metrics. The results indicated that the random forest (RF) algorithm performed best, with r and RMSE, respectively, of 0.91 and 1.9 g kg−1 for LNC, and 0.86 and 0.17 m for PH. It was also demonstrated that the VIs contributed more to the algorithm’s performance than individual spectral bands. This study concludes that the RF model is appropriate to predict both agronomic variables in maize and may help farmers to monitor their plants based upon their LNC and PH diagnosis and use this knowledge to improve their production rates in subsequent seasons.
- Published
- 2020
- Full Text
- View/download PDF
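The evaluation protocol described in the record above, cross-validating a random-forest regressor and scoring it with r, MAE, R², and RMSE, can be sketched as follows. The features and targets are synthetic stand-ins for the study's multispectral inputs and LNC values:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
# Synthetic stand-in: 8 features (e.g. 4 VIs + 4 bands) -> target (e.g. LNC, g/kg)
X = rng.normal(size=(200, 8))
y = 25 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=200)

# Randomized 10-fold cross-validation: pool out-of-fold predictions
preds, trues = [], []
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[train], y[train])
    preds.append(model.predict(X[test]))
    trues.append(y[test])
pred, true = np.concatenate(preds), np.concatenate(trues)

r = np.corrcoef(true, pred)[0, 1]            # Pearson's correlation coefficient
mae = np.abs(true - pred).mean()             # mean absolute error
rmse = np.sqrt(((true - pred) ** 2).mean())  # root mean square error
r2 = 1 - ((true - pred) ** 2).sum() / ((true - true.mean()) ** 2).sum()
```

The paper's 100 replicates would simply repeat this loop with different `random_state` values and average the four metrics, which damps the variance of any single fold assignment.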
23. A Novel Deep Learning Method to Identify Single Tree Species in UAV-Based Hyperspectral Images
- Author
-
Gabriela Takahashi Miyoshi, Mauro dos Santos Arruda, Lucas Prado Osco, José Marcato Junior, Diogo Nunes Gonçalves, Nilton Nobuhiro Imai, Antonio Maria Garcia Tommaselli, Eija Honkavaara, and Wesley Nunes Gonçalves
- Subjects
high-density object ,data-reduction ,band selection ,convolutional neural network ,tree species identification ,Science - Abstract
Deep neural networks are currently the focus of many remote sensing approaches related to forest management. Although they return satisfactory results in most tasks, some challenges related to hyperspectral data remain, such as the curse of dimensionality. In forested areas, another common problem is the highly dense distribution of trees. In this paper, we propose a novel deep learning approach for hyperspectral imagery to identify single-tree species in highly dense areas. We evaluated images with 25 spectral bands ranging from 506 to 820 nm taken over a semideciduous forest of the Brazilian Atlantic biome. We included in our network’s architecture a band combination selection phase. This phase learns, from multiple combinations between bands, which ones contributed the most to the tree identification task. This is followed by feature map extraction and a multi-stage model refinement of the confidence map to produce accurate results for a highly dense target. Our method returned f-measure, precision, and recall values of 0.959, 0.973, and 0.945, respectively. The results were superior when compared with a principal component analysis (PCA) approach. Compared to other learning methods, ours estimates, within the network’s architecture, the combination of hyperspectral bands that contributes most to the mentioned task. With this, the proposed method achieved state-of-the-art performance for detecting and geolocating individual tree species in UAV-based hyperspectral images in a complex forest.
- Published
- 2020
- Full Text
- View/download PDF
24. A Machine Learning Framework to Predict Nutrient Content in Valencia-Orange Leaf Hyperspectral Measurements
- Author
-
Lucas Prado Osco, Ana Paula Marques Ramos, Mayara Maezano Faita Pinheiro, Érika Akemi Saito Moriya, Nilton Nobuhiro Imai, Nayara Estrabis, Felipe Ianczyk, Fábio Fernando de Araújo, Veraldo Liesenberg, Lúcio André de Castro Jorge, Jonathan Li, Lingfei Ma, Wesley Nunes Gonçalves, José Marcato Junior, and José Eduardo Creste
- Subjects
spectroscopy ,proximal sensor ,macronutrient ,micronutrient ,artificial intelligence ,Science - Abstract
This paper presents a framework based on machine learning algorithms to predict nutrient content in leaf hyperspectral measurements. This is the first approach to evaluate macro- and micronutrient content with both machine learning and reflectance/first-derivative data. For this, citrus leaves collected at a Valencia-orange orchard were used. Their spectral data were measured with an ASD FieldSpec® HandHeld 2 spectroradiometer, and the surface reflectance and first-derivative spectra from the spectral range of 380 to 1020 nm (640 spectral bands) were evaluated. A total of 320 spectral signatures were collected, and the leaf nutrient content (N, P, K, Mg, S, Cu, Fe, Mn, and Zn) was associated with them. For this, 204,800 (320 × 640) combinations were used. The following machine learning algorithms were used in this framework: k-Nearest Neighbor (kNN), Lasso Regression, Ridge Regression, Support Vector Machine (SVM), Artificial Neural Network (ANN), Decision Tree (DT), and Random Forest (RF). The training methods were assessed based on cross-validation and leave-one-out. The Relief-F metric of the algorithms’ predictions was used to determine the most contributive wavelength or spectral region associated with each nutrient. This approach was able to return, with high prediction accuracy (R²), nutrients like N (0.912), Mg (0.832), Cu (0.861), Mn (0.898), and Zn (0.855) and, to a lesser extent, P (0.771), K (0.763), and S (0.727). These accuracies were obtained with different algorithms, but RF was the most suitable to model most of them. The results indicate that, for the Valencia-orange leaves, surface reflectance data are more suitable to predict macronutrients, while first-derivative spectra are better linked to micronutrients. A final contribution of this study is the identification of the wavelengths responsible for these predictions.
- Published
- 2020
- Full Text
- View/download PDF
25. Applying Fully Convolutional Architectures for Semantic Segmentation of a Single Tree Species in Urban Environment on High Resolution UAV Optical Imagery
- Author
-
Daliana Lobo Torres, Raul Queiroz Feitosa, Patrick Nigri Happ, Laura Elena Cué La Rosa, José Marcato Junior, José Martins, Patrik Olã Bressan, Wesley Nunes Gonçalves, and Veraldo Liesenberg
- Subjects
deep learning ,fully convolution neural networks ,semantic segmentation ,unmanned aerial vehicle (uav) ,Chemical technology ,TP1-1185 - Abstract
This study proposes and evaluates five deep fully convolutional networks (FCNs) for the semantic segmentation of a single tree species: SegNet, U-Net, FC-DenseNet, and two DeepLabv3+ variants. The performance of the FCN designs is evaluated experimentally in terms of classification accuracy and computational load. We also verify the benefits of fully connected conditional random fields (CRFs) as a post-processing step to improve the segmentation maps. The analysis is conducted on a set of images captured by an RGB camera aboard a UAV flying over an urban area. The dataset also contains a mask that indicates the occurrence of an endangered species called Dipteryx alata Vogel, also known as cumbaru, taken as the species to be identified. The experimental analysis shows the effectiveness of each design, reporting average overall accuracy ranging from 88.9% to 96.7%, an F1-score between 87.0% and 96.1%, and IoU from 77.1% to 92.5%. We also found that the CRF consistently improved performance, but at a high computational cost.
- Published
- 2020
- Full Text
- View/download PDF
26. Assessment of CNN-Based Methods for Individual Tree Detection on Images Captured by RGB Cameras Attached to UAVs
- Author
-
Anderson Aparecido dos Santos, José Marcato Junior, Márcio Santos Araújo, David Robledo Di Martini, Everton Castelão Tetila, Henrique Lopes Siqueira, Camila Aoki, Anette Eltner, Edson Takashi Matsubara, Hemerson Pistori, Raul Queiroz Feitosa, Veraldo Liesenberg, and Wesley Nunes Gonçalves
- Subjects
object-detection ,deep learning ,remote sensing ,Chemical technology ,TP1-1185 - Abstract
The detection and classification of tree species from remote sensing data have been performed mainly using multispectral and hyperspectral images and Light Detection And Ranging (LiDAR) data. Despite the comparatively lower cost and higher spatial resolution, few studies have focused on images captured by Red-Green-Blue (RGB) sensors. Besides, recent years have witnessed impressive progress in deep learning methods for object detection. Motivated by this scenario, we proposed and evaluated the use of Convolutional Neural Network (CNN)-based methods combined with Unmanned Aerial Vehicle (UAV) high spatial resolution RGB imagery for the detection of law-protected tree species. Three state-of-the-art object detection methods were evaluated: Faster Region-based Convolutional Neural Network (Faster R-CNN), YOLOv3, and RetinaNet. A dataset was built to assess the selected methods, comprising 392 RGB images captured from August 2018 to February 2019 over a forested urban area in midwest Brazil. The target object is an important tree species threatened by extinction known as Dipteryx alata Vogel (Fabaceae). The experimental analysis delivered an average precision of around 92% with associated processing times below 30 milliseconds.
- Published
- 2019
- Full Text
- View/download PDF
27. Prototypical Contrastive Network for Imbalanced Aerial Image Segmentation.
- Author
-
Keiller Nogueira, Mayara Maezano Faita Pinheiro, Ana Paula Marques Ramos, Wesley Nunes Gonçalves, José Marcato Junior, and Jefersson A. dos Santos
- Published
- 2024
- Full Text
- View/download PDF
28. NLP-Based Fusion Approach to Robust Image Captioning.
- Author
-
Riccardo Ricci, Farid Melgani, José Marcato Junior, and Wesley Nunes Gonçalves
- Published
- 2024
- Full Text
- View/download PDF
29. MTLSegFormer: Multi-task Learning with Transformers for Semantic Segmentation in Precision Agriculture.
- Author
-
Diogo Nunes Gonçalves, José Marcato Jr., Pedro Zamboni, Hemerson Pistori, Jonathan Li 0001, Keiller Nogueira, and Wesley Nunes Gonçalves
- Published
- 2023
- Full Text
- View/download PDF
30. Robust Image Captioning with Post-Generation Ensemble Method.
- Author
-
Riccardo Ricci, Farid Melgani, José Marcato Junior, and Wesley Nunes Gonçalves
- Published
- 2023
- Full Text
- View/download PDF
31. A Click-Based Interactive Segmentation Network for Point Clouds.
- Author
-
Wentao Sun, Zhipeng Luo, Yiping Chen, Huxiong Li, José Marcato Junior, Wesley Nunes Gonçalves, and Jonathan Li 0001
- Published
- 2023
- Full Text
- View/download PDF
32. BoundaryNet: Extraction and Completion of Road Boundaries With Deep Learning Using Mobile Laser Scanning Point Clouds and Satellite Imagery.
- Author
-
Lingfei Ma, Ying Li 0036, Jonathan Li 0001, José Marcato Junior, Wesley Nunes Gonçalves, and Michael A. Chapman
- Published
- 2022
- Full Text
- View/download PDF
33. 3D Vehicle Detection Using Multi-Level Fusion From Point Clouds and Images.
- Author
-
Kun Zhao, Lingfei Ma, Yu Meng, Li Liu, Junbo Wang, José Marcato Junior, Wesley Nunes Gonçalves, and Jonathan Li 0001
- Published
- 2022
- Full Text
- View/download PDF
34. Building Instance Extraction Method Based on Improved Hybrid Task Cascade.
- Author
-
Xiaoxue Liu, Yiping Chen, Mingqiang Wei, Cheng Wang 0003, Wesley Nunes Gonçalves, José Marcato Junior, and Jonathan Li 0001
- Published
- 2022
- Full Text
- View/download PDF
35. RADAM: Texture Recognition through Randomized Aggregated Encoding of Deep Activation Maps.
- Author
-
Leonardo F. S. Scabini, Kallil M. C. Zielinski, Lucas Correia Ribas, Wesley Nunes Gonçalves, Bernard De Baets, and Odemir M. Bruno
- Published
- 2023
- Full Text
- View/download PDF
36. Segmentation of Tree Canopies in Urban Environments Using Dilated Convolutional Neural Network.
- Author
-
José Augusto Correa Martins, Keiller Nogueira, Pedro Zamboni, Paulo Tarso Sanches de Oliveira, Wesley Nunes Gonçalves, Jefersson A. dos Santos, and José Marcato Junior
- Published
- 2021
- Full Text
- View/download PDF
37. Evaluating Different Deep Learning Models for Automatic Water Segmentation.
- Author
-
Thales Shoiti Akiyama, José Marcato Junior, Wesley Nunes Gonçalves, Mário de Araújo Carvalho, and Anette Eltner
- Published
- 2021
- Full Text
- View/download PDF
38. Assessment of CNN-Based Methods for Single Tree Detection on High-Resolution RGB Images in Urban Areas.
- Author
-
Pedro Alberto Pereira Zamboni, José Marcato Junior, Gabriela Takahashi Miyoshi, Jonathan de Andrade Silva, José Augusto Correa Martins, and Wesley Nunes Gonçalves
- Published
- 2021
- Full Text
- View/download PDF
39. Deep Learning and Google Earth Engine Applied to Mapping Eucalyptus.
- Author
-
João Otavio Nascimento Firigato, José Marcato Junior, Wesley Nunes Gonçalves, and Vitor Matheus Bacani
- Published
- 2021
- Full Text
- View/download PDF
40. Retinanet Deep Learning-Based Approach to Detect Termite Mounds in Eucalyptus Forests.
- Author
-
Juan Sales, José Marcato Junior, Henrique Lopes Siqueira, Maurício de Souza, Edson Takashi Matsubara, and Wesley Nunes Gonçalves
- Published
- 2021
- Full Text
- View/download PDF
41. Integration of Photogrammetry and Deep Learning in Earth Observation Applications.
- Author
-
José Marcato Junior, Pedro Zamboni, Mariana Batista Campos, Ana Paula Marques Ramos, Lucas Prado Osco, Jonathan de Andrade Silva, Wesley Nunes Gonçalves, and Jonathan Li 0001
- Published
- 2021
- Full Text
- View/download PDF
42. Capsule-Based Networks for Road Marking Extraction and Classification From Mobile LiDAR Point Clouds.
- Author
-
Lingfei Ma, Ying Li 0036, Jonathan Li 0001, Yongtao Yu, José Marcato Junior, Wesley Nunes Gonçalves, and Michael A. Chapman
- Published
- 2021
- Full Text
- View/download PDF
43. Identifying Building Rooftops in Hyperspectral Imagery Using CNN With Pure Pixel Index.
- Author
-
Yao Li 0013, Chengming Ye, Yonggang Ge, José Marcato Junior, Wesley Nunes Gonçalves, and Jonathan Li 0001
- Published
- 2021
- Full Text
- View/download PDF
44. Image Segmentation and Classification with SLIC Superpixel and Convolutional Neural Network in Forest Context.
- Author
-
José Augusto Correa Martins, José Marcato Junior, Geazy Menezes, Hemerson Pistori, Diego Sant'Ana, and Wesley Nunes Gonçalves
- Published
- 2019
- Full Text
- View/download PDF
45. Importance of Vertices in Complex Networks Applied to Texture Analysis.
- Author
-
Sávio Vinícius Albieri Barone Cantero, Diogo Nunes Gonçalves, Leonardo F. S. Scabini, and Wesley Nunes Gonçalves
- Published
- 2020
- Full Text
- View/download PDF
46. Characterization of MSS Channel Reflectance and Derived Spectral Indices for Building Consistent Landsat 1-5 Data Record.
- Author
-
Feng Chen 0022, Qiancong Fan, Shenlong Lou, Limin Yang, Chenxing Wang, Martin Claverie, Cheng Wang 0003, José Marcato Junior, Wesley Nunes Gonçalves, and Jonathan Li 0001
- Published
- 2020
- Full Text
- View/download PDF
47. Deep4Fusion: A Deep FORage Fusion framework for high-throughput phenotyping for green and dry matter yield traits.
- Author
-
Lucas de Souza Rodrigues, Edmar Caixeta Filho, Kenzo Miranda Sakiyama, Mateus Figueiredo Santos, Liana Jank, Camilo Carromeu, Eloise Silveira, Edson Takashi Matsubara, José Marcato Junior, and Wesley Nunes Gonçalves
- Published
- 2023
- Full Text
- View/download PDF
48. DESCINet: A hierarchical deep convolutional neural network with skip connection for long time series forecasting.
- Author
-
Andre Quintiliano Bezerra Silva, Wesley Nunes Gonçalves, and Edson Takashi Matsubara
- Published
- 2023
- Full Text
- View/download PDF
49. MAPEAMENTO DE RIOS EM IMAGENS RGB COM APRENDIZAGEM DE MÁQUINA SUPERVISIONADA.
- Author
-
De Souza, Mariany Kerriany Gonçalves, primary, Faita Pinheiro, Mayara Maezano, additional, Furuya, Danielle Elis Garcia, additional, Osco, Lucas Prado, additional, Júnior, José Marcato, additional, Gonçalves, Wesley Nunes, additional, and Ramos, Ana Paula Marques, additional
- Published
- 2024
- Full Text
- View/download PDF
50. Recognition of Endangered Pantanal Animal Species using Deep Learning Methods.
- Author
-
Mauro dos Santos de Arruda, Gabriel Spadon, José F. Rodrigues Jr., Wesley Nunes Gonçalves, and Bruno Brandoli Machado
- Published
- 2018
- Full Text
- View/download PDF