@unoeste.br
Faculty of Engineering, Architecture and Urbanism.
University of Western São Paulo (UNOESTE)
Currently a Professor at the University of Western São Paulo (UNOESTE).
Environmental Engineer with a Ph.D. in Environmental Technologies and a postdoctorate in Natural Resources from the Federal University of Mato Grosso do Sul (UFMS).
Environmental Science, Earth and Planetary Sciences, Computer Vision and Pattern Recognition
Michelle Taís Garcia Furuya, Danielle Elis Garcia Furuya, Lucas Yuri Dutra de Oliveira, Paulo Antonio da Silva, Rejane Ennes Cicerelli, Wesley Nunes Gonçalves, José Marcato Junior, Lucas Prado Osco, and Ana Paula Marques Ramos
Springer Science and Business Media LLC
Diogo Nunes Gonçalves, José Marcato, André Caceres Carrilho, Plabiany Rodrigo Acosta, Ana Paula Marques Ramos, Felipe David Georges Gomes, Lucas Prado Osco, Maxwell da Rosa Oliveira, José Augusto Correa Martins, Geraldo Alves Damasceno, et al.
Elsevier BV
Mário de Araújo Carvalho, José Marcato, José Augusto Correa Martins, Pedro Zamboni, Celso Soares Costa, Henrique Lopes Siqueira, Márcio Santos Araújo, Diogo Nunes Gonçalves, Danielle Elis Garcia Furuya, Lucas Prado Osco, et al.
Elsevier BV
Diogo Nunes Gonçalves, Plabiany Rodrigo Acosta, Ana Paula Marques Ramos, Lucas Prado Osco, Danielle Elis Garcia Furuya, Michelle Taís Garcia Furuya, Jonathan Li, José Marcato Junior, Hemerson Pistori, and Wesley Nunes Gonçalves
Elsevier BV
Lucas Prado Osco, Danielle Elis Garcia Furuya, Michelle Taís Garcia Furuya, Daniel Veras Corrêa, Wesley Nunes Gonçalvez, José Marcato Junior, Miguel Borges, Maria Carolina Blassioli-Moraes, Mirian Fernandes Furtado Michereff, Michely Ferreira Santos Aquino, et al.
Elsevier BV
Mauro dos Santos de Arruda, Lucas Prado Osco, Plabiany Rodrigo Acosta, Diogo Nunes Gonçalves, José Marcato Junior, Ana Paula Marques Ramos, Edson Takashi Matsubara, Zhipeng Luo, Jonathan Li, Jonathan de Andrade Silva, et al.
Elsevier BV
Patrik Olã Bressan, José Marcato Junior, José Augusto Correa Martins, Maximilian Jaderson de Melo, Diogo Nunes Gonçalves, Daniel Matte Freitas, Ana Paula Marques Ramos, Michelle Taís Garcia Furuya, Lucas Prado Osco, Jonathan de Andrade Silva, et al.
Elsevier BV
Maximilian Jaderson de Melo, Diogo Nunes Gonçalves, Marina de Nadai Bonin Gomes, Gedson Faria, Jonathan de Andrade Silva, Ana Paula Marques Ramos, Lucas Prado Osco, Michelle Taís Garcia Furuya, José Marcato Junior, and Wesley Nunes Gonçalves
Elsevier BV
Ana Paula Marques Ramos, Felipe David Georges Gomes, Mayara Maezano Faita Pinheiro, Danielle Elis Garcia Furuya, Wesley Nunes Gonçalvez, José Marcato Junior, Mirian Fernandes Furtado Michereff, Maria Carolina Blassioli-Moraes, Miguel Borges, Raúl Alberto Alaumann, et al.
Springer Science and Business Media LLC
Farimah Bakhshizadeh, Sarah Fatholahi, Lucas Prado Osco, José Marcato Junior, and Jonathan Li
Canadian Science Publishing
Air pollution is a significant global problem that affects climate, human health, and ecosystem health. Traffic emissions are a major source of atmospheric pollution in large cities. The aim of this research was to support air quality analysis by spatially modelling traffic-induced air pollution dispersion in urban areas at the street level. The Graz Lagrangian dispersion model (GRAL) was adapted to determine NOx concentration levels based on traffic, meteorology, building, and street configuration data on one of Tehran's high-traffic routes. Meteorological parameters such as wind speed and direction were considered significant factors. Local and global auto-correlation analyses were then used to measure temporal and spatial variations in NOx concentration at different altitudes. The results showed that the average output concentration of NOx pollutants at different altitudes ranges from 64.5 to 426.6 ppb. The resulting Moran index ranges from 0.7 to 0.9, which indicates a high level of positive spatial auto-correlation. The local Moran index analysis shows that pollution clusters with high concentration levels predominate at low to medium heights and that clusters with low pollution increase at greater heights, while there is no clear clustering in the middle sections. In addition, the analysis of pollutant concentration variations over time showed that pollution peaks occur at 07:00–08:00 and 21:00–22:00.
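For readers who want to reproduce the spatial statistics step, the global Moran's I described above can be computed with the libpysal/esda packages; the gridded NOx field below is a placeholder, not the study's data.

```python
# Sketch: global Moran's I for a gridded pollutant field.
# The 2-D array `nox` of NOx concentrations (ppb) at one height level is a
# placeholder; grid size and values are illustrative assumptions.
import numpy as np
from libpysal.weights import lat2W
from esda.moran import Moran

nox = np.random.default_rng(0).gamma(shape=2.0, scale=50.0, size=(40, 60))

w = lat2W(*nox.shape, rook=False)   # queen-contiguity weights for the regular grid
w.transform = "r"                   # row-standardize, as is conventional for Moran's I
mi = Moran(nox.flatten(), w)

print(f"Moran's I = {mi.I:.3f}, pseudo p-value = {mi.p_sim:.4f}")
# Values near +1 indicate strong positive spatial autocorrelation,
# consistent with the 0.7-0.9 range reported in the abstract.
```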
Diego de Castro Rodrigues, Vilson Siqueira, Fabiano Tavares, Márcio Lima, Frederico Oliveira, Lucas Osco, Wilmar Junior, Ronaldo Costa, and Rommel Barbosa
Springer Singapore
Danielle Elis Garcia Furuya, Lingfei Ma, Mayara Maezano Faita Pinheiro, Felipe David Georges Gomes, Wesley Nunes Gonçalvez, José Marcato Junior, Diego de Castro Rodrigues, Maria Carolina Blassioli-Moraes, Mirian Fernandes Furtado Michereff, Miguel Borges, et al.
Elsevier BV
José Augusto Correa Martins, Geazy Menezes, Wesley Gonçalves, Diego André Sant’Ana, Lucas Prado Osco, Veraldo Liesenberg, Jonathan Li, Lingfei Ma, Paulo Tarso Oliveira, Gilberto Astolfi, et al.
Elsevier BV
Luciene Sales Dagher Arce, Lucas Prado Osco, Mauro dos Santos de Arruda, Danielle Elis Garcia Furuya, Ana Paula Marques Ramos, Camila Aoki, Arnildo Pott, Sarah Fatholahi, Jonathan Li, Fábio Fernando de Araújo, et al.
Springer Science and Business Media LLC
Accurately mapping individual tree species in densely forested environments is crucial for forest inventory. When considering only RGB images, this is a challenging task for many automatic photogrammetry processes. The main reason is the spectral similarity between species in RGB scenes, which hinders most automatic methods. This paper presents a deep learning-based approach to detect an important multi-use species of palm tree (Mauritia flexuosa, i.e., Buriti) in aerial RGB imagery. In South America, this palm tree is essential for many indigenous and local communities because of its characteristics. The species is also a valuable indicator of water resources, which is an additional benefit of mapping its location. The method is based on a Convolutional Neural Network (CNN) that identifies and geolocates a single tree species in a high-complexity forest environment. The results returned a mean absolute error (MAE) of 0.75 trees and an F1-measure of 86.9%. These results are better than those of the Faster R-CNN and RetinaNet methods under equal experimental conditions. In conclusion, the presented method deals efficiently with high-density forest scenarios, can accurately map the location of a single species such as the M. flexuosa palm tree, and may be useful for future frameworks.
Paulo Eduardo Teodoro, Larissa Pereira Ribeiro Teodoro, Fábio Henrique Rojo Baio, Carlos Antonio da Silva Junior, Regimar Garcia dos Santos, Ana Paula Marques Ramos, Mayara Maezano Faita Pinheiro, Lucas Prado Osco, Wesley Nunes Gonçalves, Alexsandro Monteiro Carneiro, et al.
MDPI AG
In soybean, there is a lack of research comparing the performance of machine learning (ML) and deep learning (DL) methods for predicting more than one agronomic variable, such as days to maturity (DM), plant height (PH), and grain yield (GY). As these variables are important for developing an overall precision farming model, we propose a machine learning approach to predict DM, PH, and GY for soybean cultivars based on multispectral bands. The field experiment considered 524 soybean genotypes in the 2017/2018 and 2018/2019 growing seasons and a multitemporal–multispectral dataset collected by a sensor embedded in an unmanned aerial vehicle (UAV). We proposed a multilayer deep learning regression network, trained for 2000 epochs using an adaptive subgradient method, random Gaussian initialization, and a 50% dropout in the first hidden layer for regularization. Three different scenarios, including only spectral bands, only vegetation indices, and spectral bands plus vegetation indices, were adopted to infer each variable (PH, DM, and GY). The DL model's performance was compared against shallow learning methods such as random forest (RF), support vector machine (SVM), and linear regression (LR). The results indicate that our approach has the potential to predict soybean-related variables using multispectral bands only. Both the DL and RF models presented a strong prediction capacity (r surpassing 0.77) for the PH variable, regardless of the adopted group of input variables. Our results demonstrated that the DL model (r = 0.66) was superior for predicting DM when the input variables were the spectral bands. For GY, all evaluated machine learning models presented similar performance (r ranging from 0.42 to 0.44) in each tested scenario. In conclusion, this study demonstrated an efficient computational solution capable of predicting multiple important soybean crop variables based on remote sensing data. Future research could benefit from the information presented here and implement it in subsequent processes related to soybean cultivars or other agronomic crops.
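The regression network described (adaptive subgradient optimizer, random Gaussian initialization, 50% dropout in the first hidden layer, 2000 epochs) could be sketched in Keras roughly as follows; layer widths, input size, and data are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch of a multilayer regression network with the ingredients named in the
# abstract: Adagrad (adaptive subgradient) optimizer, random Gaussian (normal)
# initialization, and 50% dropout after the first hidden layer.
# Layer widths, input size, and the data below are illustrative placeholders.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 5                                            # e.g., multispectral bands per plot
X = np.random.rand(524, n_features).astype("float32")     # placeholder reflectance data
y = np.random.rand(524, 3).astype("float32")              # placeholder DM, PH, GY targets

model = keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(64, activation="relu", kernel_initializer="random_normal"),
    layers.Dropout(0.5),                                  # 50% dropout in the first hidden layer
    layers.Dense(32, activation="relu", kernel_initializer="random_normal"),
    layers.Dense(3),                                      # predicts DM, PH, and GY jointly
])
model.compile(optimizer=keras.optimizers.Adagrad(learning_rate=0.01), loss="mse")
model.fit(X, y, epochs=2000, batch_size=32, verbose=0)    # 2000 epochs, as in the abstract
```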
Diego Bedin Marin, Gabriel Araújo e Silva Ferraz, Lucas Santos Santana, Brenon Diennevan Souza Barbosa, Rafael Alexandre Pena Barata, Lucas Prado Osco, Ana Paula Marques Ramos, and Paulo Henrique Sales Guimarães
Elsevier BV
Coffee leaf rust (CLR) is one of the most devastating leaf diseases in coffee plantations. By knowing the symptoms, severity, and spatial distribution of CLR, farmers can improve disease management procedures and reduce the associated losses. Recently, Unmanned Aerial Vehicle (UAV)-based images, in conjunction with machine learning (ML) techniques, have helped solve multiple agriculture-related problems. In this sense, vegetation indices processed with ML algorithms are a promising strategy. It is still a challenge to map CLR severity levels using remote sensing data and an ML approach. Here we propose a framework to detect CLR severity using only vegetation indices extracted from UAV imagery. For that, we based our approach on decision tree models, as they have demonstrated important results in related works. We evaluated a coffee field with different CLR infestation classes: class 1 (2% to 5% rust); class 2 (5% to 10% rust); class 3 (10% to 20% rust); and class 4 (20% to 40% rust). We acquired data with a Sequoia camera, producing images with a spatial resolution of 10.6 cm in four spectral bands: green (530–570 nm), red (640–680 nm), red-edge (730–740 nm), and near-infrared (770–810 nm). A total of 63 vegetation indices were extracted from the images, and the following learners were evaluated with 10-fold cross-validation: Logistic Model Tree (LMT), J48, ExtraTree, REPTree, Functional Trees (FT), Random Tree (RT), and Random Forest (RF). The results indicated that the LMT method contributed the most to the accurate prediction of the early and severe infestation classes. For these classes, LMT returned F-measure values of 0.915 and 0.875, thus being a good indicator of early CLR (2% to 5% rust) and later stages of CLR (20% to 40% rust). We demonstrated a valid approach to modelling rust in coffee plants using only vegetation indices and ML algorithms, specifically for the disease's early and later stages. We concluded that the proposed framework allows inferring the predicted classes in the remaining plants within the sampled area, thus helping to identify potential CLR in non-sampled plants. We corroborate that the decision tree-based model may assist in precision agriculture practices, including mapping rust in coffee plantations, providing efficient, non-invasive, and spatially continuous monitoring of the disease.
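The general pipeline (vegetation indices derived from the four Sequoia bands, then tree-based learners scored with 10-fold cross-validation) can be prototyped as below; the Weka learners used in the paper (LMT, J48, REPTree, FT) have no direct scikit-learn equivalents, so a random forest stands in purely for illustration, and the data are placeholders.

```python
# Illustrative sketch only: two of the many possible vegetation indices from the
# four Sequoia bands, fed to a tree-based classifier under 10-fold cross-validation.
# The paper's LMT/J48/REPTree/FT learners are Weka algorithms; RandomForest is a stand-in.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)
green, red, rededge, nir = (rng.random(400) for _ in range(4))  # placeholder band reflectances
labels = rng.integers(1, 5, size=400)                            # CLR severity classes 1-4

ndvi = (nir - red) / (nir + red + 1e-9)          # normalized difference vegetation index
ndre = (nir - rededge) / (nir + rededge + 1e-9)  # red-edge variant
X = np.column_stack([ndvi, ndre])                # the study used 63 indices; two shown here

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                         X, labels, cv=cv, scoring="f1_macro")
print(f"10-fold macro F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```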
Lucas Prado Osco, José Marcato Junior, Ana Paula Marques Ramos, Lúcio André de Castro Jorge, Sarah Narges Fatholahi, Jonathan de Andrade Silva, Edson Takashi Matsubara, Hemerson Pistori, Wesley Nunes Gonçalves, and Jonathan Li
Elsevier BV
Deep Neural Networks (DNNs) learn representations from data with an impressive capability and have brought important breakthroughs in processing images, time series, natural language, audio, video, and many other data types. In the remote sensing field, surveys and literature reviews specifically involving applications of DNN algorithms have been conducted in an attempt to summarize the amount of information produced in its subfields. Recently, Unmanned Aerial Vehicle (UAV)-based applications have dominated aerial sensing research. However, a literature review that combines the "deep learning" and "UAV remote sensing" themes has not yet been conducted. The motivation for our work was to present a comprehensive review of the fundamentals of Deep Learning (DL) applied to UAV-based imagery. We focused mainly on describing classification and regression techniques used in recent applications with UAV-acquired data. For that, a total of 232 papers published in international scientific journal databases were examined. We gathered the published material and evaluated its characteristics regarding the application, sensor, and technique used. We discuss how DL presents promising results and has the potential for processing tasks associated with UAV-based image data. Lastly, we project future perspectives, commenting on prominent DL paths to be explored in the UAV remote sensing field. Our review offers a friendly approach to introducing, commenting on, and summarizing the state of the art in UAV-based image applications with DNN algorithms in diverse subfields of remote sensing, grouping them into environmental, urban, and agricultural contexts.
José Augusto Correa Martins, Keiller Nogueira, Lucas Prado Osco, Felipe David Georges Gomes, Danielle Elis Garcia Furuya, Wesley Nunes Gonçalves, Diego André Sant’Ana, Ana Paula Marques Ramos, Veraldo Liesenberg, Jefersson Alex dos Santos, et al.
MDPI AG
Urban forests are an important part of any city, given that they provide several environmental benefits, such as improving urban drainage, climate regulation, public health, and biodiversity, among others. However, tree detection in cities is challenging, given the irregular shape, size, occlusion, and complexity of urban areas. With the advance of environmental technologies, deep learning segmentation methods can map urban forests accurately. We applied a region-based CNN object instance segmentation algorithm for the semantic segmentation of tree canopies in urban environments based on aerial RGB imagery. To the best of our knowledge, no study has investigated the performance of deep learning-based methods for segmentation tasks inside the Cerrado biome, specifically for urban tree segmentation. Five state-of-the-art architectures were evaluated, namely Fully Convolutional Network, U-Net, SegNet, Dynamic Dilated Convolution Network, and DeepLabV3+. The experimental analysis showed the effectiveness of these methods, with results such as a pixel accuracy of 96.35%, an average accuracy of 91.25%, an F1-score of 91.40%, a Kappa of 82.80%, and an IoU of 73.89%. We also determined the inference time needed per area; after training, the deep learning methods investigated proved suitable for this task, providing fast and effective solutions with inference times varying from 0.042 to 0.153 minutes per hectare. We conclude that the semantic segmentation of trees inside urban environments is highly achievable with deep neural networks. This information could be of high importance for decision-making and may contribute to the management of urban systems. It is also important to mention that the dataset used in this work is available on our website.
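For reference, all of the reported metrics (pixel accuracy, average per-class accuracy, F1-score, Kappa, and IoU) can be derived from pixel-level predictions; a minimal sketch with scikit-learn on placeholder label maps follows.

```python
# Sketch: segmentation metrics from pixel-level predictions (binary tree / non-tree).
# `y_true` and `y_pred` are flattened label maps; the arrays here are placeholders.
import numpy as np
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             cohen_kappa_score, f1_score, jaccard_score)

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, size=100_000)                          # ground-truth tree mask
y_pred = np.where(rng.random(100_000) < 0.9, y_true, 1 - y_true)   # noisy predictions

print("pixel accuracy  :", accuracy_score(y_true, y_pred))
print("average accuracy:", balanced_accuracy_score(y_true, y_pred))  # mean per-class recall
print("F1-score        :", f1_score(y_true, y_pred))
print("Cohen's kappa   :", cohen_kappa_score(y_true, y_pred))
print("IoU (Jaccard)   :", jaccard_score(y_true, y_pred))
```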
Lucas Prado Osco, Keiller Nogueira, Ana Paula Marques Ramos, Mayara Maezano Faita Pinheiro, Danielle Elis Garcia Furuya, Wesley Nunes Gonçalves, Lucio André de Castro Jorge, José Marcato Junior, and Jefersson Alex dos Santos
Springer Science and Business Media LLC
Accurately mapping farmlands is important for precision agriculture practices. Unmanned aerial vehicles (UAVs) embedded with multispectral cameras are commonly used to map plants in agricultural landscapes. However, separating plantation fields from the remaining objects in a multispectral scene is a difficult task for traditional algorithms. In this context, deep learning methods that perform semantic segmentation could help improve the overall outcome. In this study, state-of-the-art deep learning methods for the semantic segmentation of citrus trees in multispectral images were evaluated. For this purpose, a multispectral camera operating in the green (530–570 nm), red (640–680 nm), red-edge (730–740 nm), and near-infrared (770–810 nm) spectral regions was used. The performance of the following five state-of-the-art pixelwise methods was evaluated: fully convolutional network (FCN), U-Net, SegNet, dynamic dilated convolution network (DDCN), and DeepLabV3+. The results indicated that the evaluated methods performed similarly in the proposed task, returning F1-scores between 94.00% (FCN and U-Net) and 94.42% (DDCN). The inference time needed per area was also determined and, although the DDCN method was slower, a qualitative analysis showed that it performed better in highly shadow-affected areas. This study demonstrated that the semantic segmentation of citrus orchards is highly achievable with deep neural networks. The state-of-the-art deep learning methods investigated here proved to be equally suitable for this task, providing fast solutions with inference times varying from 0.98 to 4.36 min per hectare. This approach could be incorporated into similar research and contribute to decision-making and the accurate mapping of plantation fields.
Gabriel Silva de Oliveira, José Marcato Junior, Caio Polidoro, Lucas Prado Osco, Henrique Siqueira, Lucas Rodrigues, Liana Jank, Sanzio Barrios, Cacilda Valle, Rosângela Simeão, et al.
MDPI AG
Forage dry matter is the main source of nutrients in the diet of ruminant animals. Thus, this trait is evaluated in most forage breeding programs with the objective of increasing yield. Novel solutions combining unmanned aerial vehicles (UAVs) and computer vision are crucial to increase the efficiency of forage breeding programs and to support high-throughput phenotyping (HTP), aiming to estimate parameters correlated with important traits. The main goal of this study was to propose a convolutional neural network (CNN) approach using UAV-RGB imagery to estimate dry matter yield traits in a guineagrass breeding program. For this, an experiment composed of 330 plots of full-sib families and checks, conducted at Embrapa Beef Cattle, Brazil, was used. The image dataset was composed of images obtained with an RGB sensor embedded in a Phantom 4 PRO. The traits leaf dry matter yield (LDMY) and total dry matter yield (TDMY) were obtained by conventional agronomic methodology and considered as the ground-truth data. Different CNN architectures were analyzed, such as AlexNet, ResNeXt50, DarkNet53, and two networks recently proposed for related tasks, named MaCNN and LF-CNN. Pretrained AlexNet and ResNeXt50 architectures were also studied. Ten-fold cross-validation was used for training and testing the model. The estimates of DMY traits by each CNN architecture were considered as new HTP traits and compared with the real traits. The Pearson correlation coefficient (r) between real and HTP traits ranged from 0.62 to 0.79 for LDMY and from 0.60 to 0.76 for TDMY; the root mean square error (RMSE) ranged from 286.24 to 366.93 kg·ha−1 for LDMY and from 413.07 to 506.56 kg·ha−1 for TDMY. All the CNNs generated heritable HTP traits, except LF-CNN for LDMY and AlexNet for TDMY. Genetic correlations between real and HTP traits were high but varied according to the CNN architecture. The HTP trait from the pretrained ResNeXt50 achieved the best results for indirect selection regardless of the dry matter trait. This demonstrates that CNNs with remote sensing data are highly promising for HTP of dry matter yield traits in forage breeding programs.
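A minimal sketch of the agreement statistics used to compare field-measured and CNN-estimated yields (Pearson r and RMSE); the arrays are placeholders, not the experiment's data.

```python
# Sketch: agreement between field-measured dry matter yield and a CNN-estimated
# HTP trait, using Pearson's r and the root mean square error (RMSE).
# The two arrays below are placeholders standing in for the 330-plot experiment.
import numpy as np
from scipy.stats import pearsonr

real = np.random.default_rng(3).normal(3000.0, 500.0, size=330)       # kg/ha, ground truth
htp = real + np.random.default_rng(4).normal(0.0, 350.0, size=330)    # CNN estimate

r, _ = pearsonr(real, htp)
rmse = float(np.sqrt(np.mean((real - htp) ** 2)))
print(f"Pearson r = {r:.2f}, RMSE = {rmse:.1f} kg/ha")
```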
Lucas Prado Osco, Mauro dos Santos de Arruda, Diogo Nunes Gonçalves, Alexandre Dias, Juliana Batistoti, Mauricio de Souza, Felipe David Georges Gomes, Ana Paula Marques Ramos, Lúcio André de Castro Jorge, Veraldo Liesenberg, et al.
Elsevier BV
Accurately mapping croplands is an important prerequisite for precision farming, since it assists in field management, yield prediction, and environmental management. Crops are sensitive to planting patterns, and some have a limited capacity to compensate for gaps within a row. Optical imaging with sensors mounted on Unmanned Aerial Vehicles (UAVs) is currently a cost-effective option for capturing images covering croplands. However, visual inspection of such images can be a challenging and biased task, specifically for detecting plants and rows in a single step. Thus, developing an architecture capable of simultaneously extracting individual plants and plantation rows from UAV images is still an important demand to support the management of agricultural systems. In this paper, we propose a novel deep learning method based on a Convolutional Neural Network (CNN) that simultaneously detects and geolocates plantation rows while counting their plants, considering highly dense plantation configurations. The experimental setup was evaluated in (a) a cornfield (Zea mays L.) with different growth stages (i.e., recently planted and mature plants) and (b) a citrus orchard (Citrus sinensis Pera). The datasets characterize different plant density scenarios, at different locations, with different types of crops, and from different sensors and dates. This scheme was used to prove the robustness of the proposed approach, allowing a broader discussion of the method. A two-branch architecture was implemented in our CNN method, where the information obtained within the plantation row is updated into the plant detection branch and fed back to the row branch, and both are then refined by a Multi-Stage Refinement method. In the corn plantation datasets (with both growth phases, young and mature), our approach returned a mean absolute error (MAE) of 6.224 plants per image patch, a mean relative error (MRE) of 0.1038, precision and recall values of 0.856 and 0.905, respectively, and an F-measure equal to 0.876. These results were superior to the results from other deep networks (HRNet, Faster R-CNN, and RetinaNet) evaluated on the same task and dataset. For the plantation-row detection, our approach returned precision, recall, and F-measure scores of 0.913, 0.941, and 0.925, respectively. To test the robustness of our model with a different type of agriculture, we performed the same task on the citrus orchard dataset. It returned an MAE equal to 1.409 citrus trees per patch, an MRE of 0.0615, a precision of 0.922, a recall of 0.911, and an F-measure of 0.965. For the citrus plantation-row detection, our approach resulted in precision, recall, and F-measure scores equal to 0.965, 0.970, and 0.964, respectively. The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant rows in UAV images from different types of crops. The method proposed here may be applied to future decision-making models and could contribute to the sustainable management of agricultural systems.
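The counting and detection metrics reported per image patch (MAE, MRE, precision, recall, and F-measure) follow directly from per-patch counts and matched detections, as in this schematic example with placeholder numbers.

```python
# Sketch: per-patch counting metrics (MAE, MRE) and detection metrics
# (precision, recall, F-measure) for plant detection.
# Counts and TP/FP/FN totals below are placeholders, not the paper's data.
import numpy as np

true_counts = np.array([120, 98, 143, 110])      # plants per image patch (ground truth)
pred_counts = np.array([114, 101, 150, 108])     # plants detected by the network

mae = float(np.mean(np.abs(pred_counts - true_counts)))
mre = float(np.mean(np.abs(pred_counts - true_counts) / true_counts))

tp, fp, fn = 452, 38, 19                          # matched detections across all patches
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f_measure = 2 * precision * recall / (precision + recall)
print(f"MAE={mae:.3f}  MRE={mre:.4f}  P={precision:.3f}  R={recall:.3f}  F={f_measure:.3f}")
```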
Jose Marcato Junior, Pedro Zamboni, Mariana Campos, Ana Ramos, Lucas Osco, Jonathan Silva, Wesley Goncalves, and Jonathan Li
IEEE
The integration of photogrammetry and deep learning methods can be powerful for Earth observation applications. Photogrammetry techniques allow detailed geospatial products with cm-level positional accuracy to be achieved. Deep learning enables automatic image classification, segmentation, and object detection. For instance, when dealing with a large data set, photogrammetric processing steps, such as image orientation and dense point cloud generation, result in high computational costs. In contrast, deep learning methods are fast in the inference step. Here, we explore the complementarity of deep learning and photogrammetry, aiming to generate accurate and fast geospatial information. The main aim is to discuss the possibilities of using deep learning in the photogrammetric process. We conduct experiments to present the potential of the Mask R-CNN method, trained on the COCO dataset, to generate masks, which are essential for removing image observations of moving objects during the orientation (alignment) step.
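A hedged sketch of the idea: a COCO-pretrained Mask R-CNN (here torchvision's implementation, which is an assumption, as the abstract does not name the framework) producing a union mask of typical moving-object classes that can then be excluded from the orientation step. The score threshold, class list, and file name are also assumptions.

```python
# Sketch: mask pixels belonging to typical moving objects (people, cars, buses,
# trucks) with a COCO-pretrained Mask R-CNN, so their image observations can be
# discarded before photogrammetric orientation. Threshold and classes are assumptions.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

MOVING_COCO_IDS = {1, 3, 6, 8}   # person, car, bus, truck (COCO category ids)

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("frame_0001.jpg").convert("RGB")   # hypothetical input image
with torch.no_grad():
    out = model([to_tensor(image)])[0]                # dict with boxes, labels, scores, masks

w, h = image.size
moving_mask = torch.zeros((h, w), dtype=torch.bool)
for score, label, mask in zip(out["scores"], out["labels"], out["masks"]):
    if float(score) > 0.7 and int(label) in MOVING_COCO_IDS:
        moving_mask |= mask[0] > 0.5                  # union of the instance masks

# `moving_mask` can now be used to exclude keypoints/observations that fall on
# moving objects during the image orientation (alignment) step.
```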
Isabela Mello Silva, Danilo Jefferson Romero, Clécia Cristina Barbosa Guimarães, Marcelo Rodrigo Alves, Lucas Prado Osco, Arnaldo Barros e Souza, Alvaro Pires da Silva, and José A.M. Demattê
Informa UK Limited
Ana Karina Vieira da Silva, Marcus Vinicius Vieira Borges, Tays Silva Batista, Carlos Antonio da Silva Junior, Danielle Elis Garcia Furuya, Lucas Prado Osco, Larissa Pereira Ribeiro Teodoro, Fábio Henrique Rojo Baio, Ana Paula Marques Ramos, Wesley Nunes Gonçalves, et al.
MDPI AG
Machine learning (ML) techniques have gained attention in precision agriculture practices since they efficiently address multiple applications, such as estimating the growth and yield of trees in forest plantations. The combination of ML algorithms and spectral vegetation indices (VIs) from high-spatial-resolution (0.079024 m) multispectral imagery could optimize the prediction of these biometric variables. In this paper, we investigate the performance of ML techniques and VIs acquired with an unmanned aerial vehicle (UAV) to predict the diameter at breast height (DBH) and total height (Ht) of eucalyptus trees. An experimental site with six eucalyptus species was selected, and the Parrot Sequoia sensor was used. Several ML techniques were evaluated, namely random forest (RF), REPTree (DT), alternating model tree (AT), k-nearest neighbor (KNN), support vector machine (SVM), artificial neural network (ANN), linear regression (LR), and radial basis function (RBF). Each algorithm's performance was verified using the correlation coefficient (r) and the mean absolute error (MAE). We used 34 VIs as numeric input variables to predict DBH and Ht. We also added a categorical input variable to the model identifying the different eucalyptus species. The RF technique obtained an overall superior estimation for all the tested configurations. Still, the RBF also showed a higher performance for predicting DBH, numerically surpassing RF in both r and MAE in some cases. For the Ht variable, the technique that obtained the smallest MAE was SVM, though only in a particular test. In this regard, we conclude that a combination of ML and VIs extracted from UAV-based imagery is suitable to estimate DBH and Ht in eucalyptus species. The approach presented constitutes an interesting contribution to the inventory and management of planted forests.
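A sketch of the regression setup described (34 vegetation indices plus a categorical species variable as predictors, a random forest learner, and evaluation by r and MAE); all data below are placeholders.

```python
# Sketch: predicting DBH (or Ht) from vegetation indices plus a one-hot-encoded
# species label with a random forest, scored by Pearson r and MAE.
# Data shapes and values are placeholders, not the study's measurements.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
vis = rng.random((300, 34))                       # 34 vegetation indices per tree
species = rng.integers(0, 6, size=300)            # six eucalyptus species (categorical)
dbh = rng.normal(15.0, 3.0, size=300)             # diameter at breast height (cm)

species_onehot = np.eye(6)[species]               # one-hot encode the categorical variable
X = np.hstack([vis, species_onehot])

X_tr, X_te, y_tr, y_te = train_test_split(X, dbh, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

r, _ = pearsonr(y_te, pred)
print(f"r = {r:.2f}, MAE = {mean_absolute_error(y_te, pred):.2f} cm")
```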
Mayara Maezano Faita Pinheiro, Lucas Prado Osco, Tatiana Sussel Gonçalves Mendes, Rejane Ennes Cicerelli, and Ana Paula Marques Ramos
Revista Brasileira de Geografia Física