722 results for "Mallet, C."
Search Results
52. L’évaluation de la douleur chez l’animal de laboratoire [Pain assessment in laboratory animals]
- Author
Courteix, C., Eschalier, A., and Mallet, C.
- Published
- 2021
53. A marked point process for modeling lidar waveforms
- Author
Mallet, C., Lafarge, F., Soergel, U., Roux, M., Bretar, F., and Heipke, C.
- Subjects
Signal processing -- Innovations, Distribution (Probability theory) -- Usage, Waveforms -- Usage, Markov processes -- Usage, Monte Carlo method -- Usage, Digital signal processor, Business, Computers, Electronics, Electronics and electrical industries
- Published
- 2010
54. PREFACE: THE 2022 EDITION OF THE XXIVTH ISPRS CONGRESS.
- Author
Landrieu, L., Rupnik, E., Oude Elberink, S., Mallet, C., and Paparoditis, N.
- Subjects
DEEP learning, TIME series analysis, BUILDING inspection
- Published
- 2022
55. A Framework for Considering Education and Communication on Geological Subsurface Exploitation for Energy Transition
- Author
Ah-tchine, E., Mallet, C., Azaroual, M., and Jammes, L.
- Published
- 2021
56. Characterizing the vadose zone transport dynamics by using a multi-method hydrogeophysical approach
- Author
Abbas, M., Mallet, C., Jodry, C., Baltassat, J., Deparis, J., Isch, A., and Azaroual, M.
- Published
- 2021
57. LAND USE CLASSIFICATION USING DEEP MULTITASK NETWORKS
- Author
Bergado, J.R., Persello, C., Stein, A., Paparoditis, N., Mallet, C., Lafarge, F., Jiang, J., Shaker, A., Zhang, H., Liang, X., Osmanoglu, B., Soergel, U., Honkavaara, E., Scaioni, M., Zhang, J., Peled, A, Wu, L., Li, R., Yoshimura, M., Di, K., Altan, O., Abdulmuttalib, H.M., Faruque, F.S., Department of Earth Observation Science, UT-I-ITC-ACQUAL, and Faculty of Geo-Information Science and Earth Observation
- Subjects
Very high resolution, Multi-task learning, Land cover, Land use, Machine learning, Deep learning, Convolutional networks, VHR imagery, Random forest, Land use classification
- Abstract
Updated information on urban land use allows city planners and decision makers to conduct large-scale monitoring of urban areas for sustainable urban growth. Remote sensing data and classification methods offer an efficient and reliable way to update such land use maps. Features extracted from land cover maps are helpful in performing a land use classification task. Such prior information can be embedded in the design of a deep learning based land use classifier by applying a multitask learning setup: simultaneously solving a land use and a land cover classification task. In this study, we explore a fully convolutional multitask network to classify urban land use from very high resolution (VHR) imagery. We experimented with three different setups of the fully convolutional network and compared them against a baseline random forest classifier. The first setup is a standard network predicting only the land use class of each pixel in the image. The second setup is a multitask network that concatenates the land use and land cover class labels in the same output layer of the network, while the third setup accepts as input the land cover predictions, produced by a subpart of the network, concatenated to the original input image patches. The two deep multitask networks outperform the other two classifiers by at least 30% in average F1-score.
- Published
- 2020
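The "average F1-score" comparison in the entry above is typically a macro average over the land-use classes. As a hedged illustration of that metric (the per-pixel labels below are invented, not the paper's data):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 scores averaged with equal class weight."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall) if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# hypothetical per-pixel land-use labels
truth = ["residential", "road", "road", "park", "park", "park"]
pred  = ["residential", "road", "park", "park", "park", "road"]
print(round(macro_f1(truth, pred), 3))  # -> 0.722
```

Unlike a micro average, each class contributes equally here, so rare land-use classes weigh as much as dominant ones.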
58. Smart fusion of mobile laser scanner data with large scale topographic maps
- Author
Oude Elberink, S. J., Paparoditis, N., Mallet, C., Lafarge, F., Remondino, F., Toschi, I., Fuse, T., Department of Earth Observation Science, Faculty of Geo-Information Science and Earth Observation, and UT-I-ITC-ACQUAL
- Abstract
The classification of Mobile Laser Scanner (MLS) data is challenging due to the combination of high variation in point density with high variation in object appearance. How objects appear in the MLS data depends strongly on the speed and orientation of the mobile mapping platform and on occlusion by other vehicles. Many approaches classify MLS data using the geometric and contextual appearance of MLS points, voxels and segments. We present a completely different strategy by fusing the MLS data with a large scale topographic map. The underlying assumption is that the map delivers a clear hint on what to expect in the MLS data at its approximate location. The approach presented here first fuses polygon objects, such as road, water, terrain and buildings, with ground and non-ground MLS points. Non-ground MLS points above roads and terrain are further classified by segmenting and matching the laser points to corresponding map point objects. The segmentation parameters depend on the class of the map points. We show that the fusion process is capable of classifying MLS data and detecting changes between the map and the MLS data. The segmentation algorithm is not perfect: on some occasions not all MLS points are correctly assigned to the corresponding map object. Nevertheless, the proposed map fusion automatically delivers a very richly labelled point cloud, which in future work can be used as training data in deep learning approaches.
- Published
- 2020
59. Strategies to integrate IMU and LiDAR SLAM for indoor mapping
- Author
Karam, S., Vosselman, G., Lehtola, V.V., Paparoditis, N., Mallet, C., Lafarge, F., Hinz, S., Feitosa, R., Weinmann, M., Jutzi, B., Department of Earth Observation Science, UT-I-ITC-ACQUAL, and Faculty of Geo-Information Science and Earth Observation
- Subjects
Mobile Laser Scanning, SLAM, IMU, Indoor Mapping, 6DOF Pose Estimation
- Abstract
In recent years, the importance of indoor mapping has increased in a wide range of applications, such as facility management and the mapping of hazardous sites. The essential technique behind indoor mapping is simultaneous localization and mapping (SLAM), because SLAM offers suitable positioning estimates in environments where satellite positioning is not available. State-of-the-art indoor mobile mapping systems employ visual SLAM or LiDAR-based SLAM. However, visual SLAM is sensitive to textureless environments and, similarly, LiDAR-based SLAM is sensitive to pose configurations in which the geometry of the laser observations is not strong enough to reliably estimate the six-degree-of-freedom (6DOF) pose of the system. In this paper, we present different strategies that exploit the benefits of an inertial measurement unit (IMU) in pose estimation and support LiDAR-based SLAM in overcoming these problems. The proposed strategies have been implemented and tested on different datasets, and our experimental results demonstrate that they do indeed overcome these problems. We conclude that IMU observations increase the robustness of SLAM, as expected, but also that the best reconstruction accuracy is obtained not by a blind use of all observations but by filtering the measurements with a proposed reliability measure. Our results show promising improvements in reconstruction accuracy.
- Published
- 2020
60. Polarimetric calibration of L-band airborne SAR data
- Author
Maiti, A., Kumar, S., Tolpekin, V., Agarwal, S., Paparoditis, N., Mallet, C., Lafarge, F., Hinz, S., Feitosa, R., Weinmann, M., Jutzi, B., Department of Earth Observation Science, UT-I-ITC-ACQUAL, and Faculty of Geo-Information Science and Earth Observation
- Abstract
PolSAR calibration ensures that the relationship between the SAR observations and the target characteristics on the ground is consistent and resembles the theoretical estimate, which in turn improves overall data quality. Essentially, calibration prevents the propagation of uncertainty into further analysis characterising the target. In this study, UAVSAR L-band data of the Rosamond dry lake bed has been calibrated. The calibration of amplitude and phase is carried out with the help of the corner reflector array present at the Rosamond site. The dataset is further calibrated for crosstalk and channel imbalance using Quegan's distortion model. Since the crosstalk distortion model requires an accurate estimate of the covariance matrix, the optimal kernel size for its computation is selected based on the distortion model's behaviour with varying window sizes. Furthermore, the effectiveness of the calibration process has been studied using polarimetric signatures and other statistical measures.
- Published
- 2020
61. Efficient training of semantic point cloud segmentation via active learning
- Author
Lin, Y., Vosselman, G., Cao, Y., Yang, M. Y., Paparoditis, N., Mallet, C., Lafarge, F., Remondino, F., Toschi, I., Fuse, T., Department of Earth Observation Science, Faculty of Geo-Information Science and Earth Observation, and UT-I-ITC-ACQUAL
- Abstract
With the development of LiDAR and photogrammetric techniques, more and more point clouds are available with high density and over large areas. Point cloud interpretation is an important step before many real applications such as 3D city modelling. Many supervised machine learning techniques have been adapted to semantic point cloud segmentation, aiming to label point clouds automatically. Current deep learning methods have shown their potential to produce high accuracy in semantic point cloud segmentation tasks. However, these supervised methods require a large amount of labelled data for proper model performance and good generalization. In practice, manual labelling of point clouds is very expensive and time-consuming. Active learning can iteratively select unlabelled samples for manual annotation based on the current statistical model and then update the labelled data pool for the next round of model training. In order to label point clouds effectively, we propose a segment-based active learning strategy to assess the informativeness of samples. The proposed strategy uses 40% of the whole training dataset to achieve a mean IoU of 75.2%, which is 99.1% of the mIoU obtained from the model trained on the full dataset, while the baseline method using the same amount of data reaches only 69.6% mIoU, corresponding to 90.9% of the full-dataset mIoU.
- Published
- 2020
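A common way to score "informativeness" in active-learning loops like the one described above is the predictive entropy of the current model's class probabilities. A minimal sketch under that assumption (the segment names and probabilities are invented, and the paper's actual segment-based criterion may differ):

```python
import math

def entropy(probs):
    """Shannon entropy of a class-probability vector; higher = more uncertain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_informative(segment_probs, k):
    """Rank unlabelled segments by entropy and return the k most uncertain ids."""
    ranked = sorted(segment_probs, key=lambda s: entropy(segment_probs[s]), reverse=True)
    return ranked[:k]

# hypothetical softmax outputs per unlabelled segment
segments = {
    "seg_a": [0.98, 0.01, 0.01],  # confident prediction -> low entropy
    "seg_b": [0.34, 0.33, 0.33],  # near-uniform -> high entropy
    "seg_c": [0.70, 0.20, 0.10],
}
print(select_most_informative(segments, 2))  # -> ['seg_b', 'seg_c']
```

The selected segments would then be manually annotated, added to the labelled pool, and the model retrained, closing the loop.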
62. Infrastructure degradation and post-disaster damage detection using anomaly detecting Generative Adversarial Networks
- Author
Tilon, S. M., Nex, F., Duarte, D., Kerle, N., Vosselman, G., Paparoditis, N., Mallet, C., Lafarge, F., Remondino, F., Toschi, I., Fuse, T., UT-I-ITC-4DEarth, Faculty of Geo-Information Science and Earth Observation, Department of Earth Systems Analysis, Department of Earth Observation Science, and UT-I-ITC-ACQUAL
- Subjects
Generative adversarial networks, infrastructure monitoring, post-disaster, damage, anomaly detection, degradation
- Abstract
Degradation and damage detection provides essential information to maintenance workers in routine monitoring and to first responders in post-disaster scenarios. Despite advances in Earth Observation (EO), image analysis and deep learning techniques, the quality and quantity of training data for deep learning are still limited. As a result, no robust method has yet been found that transfers and generalizes well over a variety of geographic locations and damage typologies. Since damages can be seen as anomalies, occurring sparsely over time and space, we propose to use an anomaly-detecting Generative Adversarial Network (GAN) to detect damages. The main advantages of using GANs are that only healthy, unannotated images are needed, and that a variety of damages, including never-before-seen damage, can be detected. In this study we investigate 1) the ability of anomaly-detecting GANs to detect degradation (potholes and cracks) in asphalt road infrastructure using Mobile Mapper imagery and building damage (collapsed buildings, rubble piles) using post-disaster aerial imagery, and 2) the sensitivity of this method to various types of pre-processing. Our results show that we can detect damages in urban scenes at satisfactory levels, but not on asphalt roads. Future work will investigate how to further classify the detected damages and how to improve damage detection for asphalt roads.
- Published
- 2020
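Anomaly-detecting GANs of the kind described above flag damage via reconstruction error: a generator trained only on healthy imagery reconstructs anomalous content poorly. A heavily simplified sketch of the scoring step (the flattened patches and threshold are invented stand-ins, not the paper's model):

```python
def anomaly_score(image, reconstruction):
    """Mean absolute pixel residual between an input and its GAN reconstruction."""
    return sum(abs(a - b) for a, b in zip(image, reconstruction)) / len(image)

def is_damaged(image, reconstruction, threshold=0.1):
    """Flag an image patch as anomalous when the residual exceeds a threshold."""
    return anomaly_score(image, reconstruction) > threshold

# hypothetical flattened patches: the generator only learned healthy appearance
patch = [0.2, 0.4, 0.4, 0.2]
recon_ok = [0.22, 0.38, 0.41, 0.2]   # faithful reconstruction -> small residual
recon_bad = [0.2, 0.9, 0.1, 0.6]    # anomaly reconstructed poorly -> large residual
print(is_damaged(patch, recon_ok), is_damaged(patch, recon_bad))  # -> False True
```

In practice the score often also mixes in a feature-space discriminator residual, but the thresholding principle is the same.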
63. Robust resection model for aligning the mobile mapping systems trajectories at degraded and denied urban environments
- Author
Alsadik, B., Paparoditis, N., Mallet, C., Lafarge, F., Hinz, S., Feitosa, R., Weinmann, M., Jutzi, B., Faculty of Geo-Information Science and Earth Observation, UT-I-ITC-ACQUAL, and Department of Earth Observation Science
- Subjects
image resection, exterior orientation, oblique angle, equirectangular image, mobile mapping systems, camera pose
- Abstract
Mobile mapping systems (MMS) equipped with cameras and laser scanners are nowadays widely used for different geospatial applications with centimetric accuracy, at either project or national scale. The achieved positioning accuracy depends heavily on the navigation unit, namely the GNSS and IMU on board. Accordingly, in GNSS-denied and degraded environments, the absolute positioning accuracy worsens to a few metres in some cases. Frequently, ground control points (GCPs) of high positioning accuracy are used to align the MMS trajectories and to improve the accuracy when needed. The best way to tie the MMS trajectories to the GCPs is to measure them on the MMS images where the positioning accuracy has dropped. MMS images are mostly spherical panoramic (equirectangular) images and sometimes perspective, and in both types it is necessary to precisely determine the image orientation in what is called space resection or camera pose determination. For perspective images, the pose is conventionally determined by collinearity equations or by using projection and fundamental matrices, whereas for equirectangular panoramic images it is based on resecting vertical and horizontal angles. However, image pose determination remains challenging because of the nonlinearity of the model and its sensitivity to proper initialization and to the spatial distribution of the points. In this research, a generic method is presented to solve the pose resection problem for perspective and equirectangular images using oblique angles. The oblique angles are derived from the measured image coordinates based on spherical trigonometry rules and vector geometry. The developed algorithm has proven to be highly stable and to converge steadily to the global minimum. This is due to the robust geometric constraint offered by the oblique angles enclosed between the object points and the camera. As a result, the MMS trajectories are realigned accurately to the GCPs and the absolute accuracy is greatly refined. Four experimental tests are presented in which the results show the efficiency of the proposed angle-based model in different cases of simulated and real data with different image types.
- Published
- 2020
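The oblique angles driving the resection above are the angles enclosed at the camera between viewing rays to pairs of object points. Assuming the rays are available as direction vectors (a simplification of the paper's spherical-trigonometry derivation), the angle follows from the dot product:

```python
import math

def oblique_angle(ray1, ray2):
    """Angle (radians) enclosed at the camera between two viewing rays."""
    dot = sum(a * b for a, b in zip(ray1, ray2))
    norm = math.sqrt(sum(a * a for a in ray1)) * math.sqrt(sum(b * b for b in ray2))
    # clamp against floating-point drift before acos
    return math.acos(max(-1.0, min(1.0, dot / norm)))

# hypothetical rays from the camera toward two ground control points
r1 = (1.0, 0.0, 0.0)
r2 = (0.0, 1.0, 0.0)
print(round(math.degrees(oblique_angle(r1, r2)), 1))  # perpendicular rays -> 90.0
```

Because such angles are invariant to the unknown camera rotation, they provide the stable geometric constraint the abstract credits for the method's convergence.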
64. Automated Co-Registration of Intra-Epoch and Inter-Epoch Series of Multispectral UAV Images for Crop Monitoring
- Author
Mc'okeyo, P. O., Nex, F., Persello, C., Vrieling, A., Paparoditis, N., Mallet, C., Lafarge, F., Hinz, S., Feitosa, R., Weinmann, M., Jutzi, B., Department of Earth Observation Science, UT-I-ITC-ACQUAL, Faculty of Geo-Information Science and Earth Observation, UT-I-ITC-FORAGES, and Department of Natural Resources
- Subjects
Co-registration, Multispectral, Inter-epoch, Intra-epoch, Orthophoto, Image Matching, Unmanned Aerial Vehicles
- Abstract
The application of UAV-based aerial imagery has advanced exponentially in the past two decades. This can be attributed to the operational flexibility of UAVs, their ultra-high spatial resolution, their low cost, and improvements in UAV-based sensors. Nonetheless, multitemporal series of multispectral UAV imagery still suffer from significant misregistration errors, which is a concern for applications such as precision agriculture. Direct image georeferencing and co-registration is commonly done using ground control points; this is usually costly and time-consuming. This research proposes a novel approach for automatic co-registration of multitemporal UAV imagery using intensity-based keypoints. The Speeded Up Robust Features (SURF), Binary Robust Invariant Scalable Keypoints (BRISK), Maximally Stable Extremal Regions (MSER) and KAZE algorithms were tested and their parameters optimized. The image matching performance of these algorithms informed the decision to pursue further experiments with only SURF and KAZE. Optimally parametrized, the SURF and KAZE algorithms obtained co-registration accuracies of 0.1 and 0.3 pixels for intra-epoch and inter-epoch images, respectively. To obtain better intra-epoch co-registration accuracy, collective band processing is advised, whereas a one-to-one matching strategy is recommended for inter-epoch co-registration. The results were tested in a maize crop monitoring case, and a comparison of the spectral response of vegetation between the UAV sensors Parrot Sequoia and Micro MCA was performed. Owing to the missing incidence sensor, spectral and radiometric calibration of the Micro MCA imagery is observed to be key to achieving an optimal response. In addition, the cameras have different specifications and thus differ in the quality of their respective photogrammetric outputs.
- Published
- 2020
65. Classification of tree species and standing dead trees by fusing UAV-based lidar data and multispectral imagery in the 3D deep neural network PointNet++
- Author
Briechle, S., Krzystek, Peter, Vosselman, G., Paparoditis, N., Mallet, C., Lafarge, F., Remondino, F., Toschi, I., Fuse, T., Department of Earth Observation Science, UT-I-ITC-ACQUAL, and Faculty of Geo-Information Science and Earth Observation
- Abstract
Knowledge of tree species, and of dead wood in particular, is fundamental to managing our forests. Although individual-tree approaches using lidar can successfully distinguish between deciduous and coniferous trees, the classification of multiple tree species is still limited in accuracy. Moreover, the combined mapping of standing dead trees after pest infestation is becoming increasingly important. New deep learning methods outperform baseline machine learning approaches and promise a significant accuracy gain for tree mapping. In this study, we classified multiple tree species (pine, birch, alder) and standing dead trees with crowns using the 3D deep neural network (DNN) PointNet++ along with UAV-based lidar data and multispectral (MS) imagery. Aside from 3D geometry, we also integrated laser echo pulse width values and MS features into the classification process. In a preprocessing step, we generated 3D segments of single trees using a 3D detection method. Our approach achieved an overall accuracy (OA) of 90.2% and was clearly superior to a baseline method using a random forest classifier and handcrafted features (OA = 85.3%). All in all, we demonstrate that the performance of the 3D DNN is highly promising for the classification of multiple tree species and standing dead trees in practice.
- Published
- 2020
66. USING SEMANTICALLY PAIRED IMAGES TO IMPROVE DOMAIN ADAPTATION FOR THE SEMANTIC SEGMENTATION OF AERIAL IMAGES
- Author
Gritzner, D., Ostermann, J., Paparoditis, N., Mallet, C., Lafarge, F., Remondino, F., Toschi, I., and Fuse, T.
- Subjects
Domain adaptation, Transfer learning, Deep learning, Semantic segmentation, Aerial images, Neural networks, Ground truth, Semantic similarity
- Abstract
Modern machine learning, especially deep learning, which is used in a variety of applications, requires a lot of labelled data for model training. An insufficient number of training examples leads to models which do not generalize well to new input instances. This is a particularly significant problem for tasks involving aerial images: often training data is only available for a limited geographical area and a narrow time window, leading to models which perform poorly in different regions, at different times of day, or during different seasons. Domain adaptation can mitigate this issue by using labelled source-domain training examples and unlabelled target-domain images to train a model which performs well on both domains. Modern adversarial domain adaptation approaches use unpaired data. We propose using pairs of semantically similar images, i.e., images whose segmentations are accurate predictions of each other, to improve model performance. In this paper we show that, as an upper limit based on ground truth, using semantically paired aerial images during training almost always increases model performance, with an average improvement of 4.2% accuracy and 0.036 mean intersection-over-union (mIoU). Using a practical estimate of semantic similarity, we still achieve improvements in more than half of all cases, with average improvements of 2.5% accuracy and 0.017 mIoU in those cases.
- Published
- 2020
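The mIoU figures quoted in the entry above are means of per-class intersection-over-union. A minimal sketch of the metric (the toy per-pixel labels are invented, not the paper's data):

```python
def mean_iou(y_true, y_pred):
    """Mean intersection-over-union over the classes present in the ground truth."""
    ious = []
    for c in sorted(set(y_true)):
        inter = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        union = sum(1 for t, p in zip(y_true, y_pred) if t == c or p == c)
        ious.append(inter / union)  # union >= 1 since class c occurs in y_true
    return sum(ious) / len(ious)

# hypothetical flattened segmentation maps with classes 0, 1, 2
truth = [0, 0, 1, 1, 1, 2]
pred  = [0, 1, 1, 1, 2, 2]
print(mean_iou(truth, pred))  # -> 0.5
```

IoU penalizes both missed pixels and false positives per class, which is why a 0.036 mIoU gain is a meaningful improvement even when raw accuracy moves only a few percent.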
67. EXPLORING SEMANTIC RELATIONSHIPS FOR HIERARCHICAL LAND USE CLASSIFICATION BASED ON CONVOLUTIONAL NEURAL NETWORKS
- Author
Yang, Chuan, Rottensteiner, Franz, Heipke, Christian, Paparoditis, N., Mallet, C., Lafarge, F., Remondino, F., Toschi, I., and Fuse, T.
- Subjects
Hierarchical land use classification, Semantic relationships, Aerial imagery, Geospatial database, Convolutional neural network (CNN), Segmentation, Class hierarchy
- Abstract
Land use (LU) is an important information source commonly stored in geospatial databases. Most current work on automatic LU classification for updating topographic databases considers only one category level (e.g. residential or agricultural) consisting of a small number of classes. However, LU databases frequently contain very detailed information, using a hierarchical object catalogue in which the number of categories differs depending on the hierarchy level. This paper presents a method for the classification of LU on the basis of aerial images that differentiates a fine-grained class structure, exploiting the hierarchical relationship between categories at different levels of the class catalogue. Starting from a convolutional neural network (CNN) for classifying the categories of all levels, we propose a strategy to simultaneously learn the semantic dependencies between different category levels explicitly. The input to the CNN consists of aerial images and derived data as well as land cover information obtained from semantic segmentation. Its output is the class scores at three different semantic levels, based on which predictions consistent with the class hierarchy are made. We evaluate our method using two test sites and show how the classification accuracy depends on the semantic category level. While at the coarsest level an overall accuracy on the order of 90% can be achieved, at the finest level this accuracy is reduced to around 65%. Our experiments also show which classes are particularly hard to differentiate.
- Published
- 2020
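Making predictions "consistent with the class hierarchy", as the abstract above puts it, can be done by restricting the fine-level argmax to the children of the predicted coarse class. A toy sketch of that idea (the hierarchy and scores are invented; the paper's consistency mechanism is learned, not this hard constraint):

```python
# hypothetical two-level land-use hierarchy
CHILDREN = {
    "built-up": ["residential", "industrial"],
    "vegetation": ["forest", "agricultural"],
}

def consistent_prediction(coarse_scores, fine_scores):
    """Pick the coarse class first, then the best fine class among its children."""
    coarse = max(coarse_scores, key=coarse_scores.get)
    fine = max(CHILDREN[coarse], key=lambda c: fine_scores[c])
    return coarse, fine

coarse = {"built-up": 0.7, "vegetation": 0.3}
fine = {"residential": 0.3, "industrial": 0.25, "forest": 0.45, "agricultural": 0.0}
# an unconstrained fine-level argmax would pick "forest", contradicting "built-up"
print(consistent_prediction(coarse, fine))  # -> ('built-up', 'residential')
```

This illustrates why coarse-level accuracy bounds fine-level accuracy: a wrong coarse decision rules out the correct fine class entirely.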
68. DEEP LEARNING FOR MONOCULAR DEPTH ESTIMATION FROM UAV IMAGES
- Author
Madhuanand, L., Nex, Francesco, Yang, Michael, Paparoditis, N., Mallet, C., Lafarge, F., Remondino, F., Toschi, I., Fuse, T., and Landdegradatie en aardobservatie
- Subjects
Monocular, Depth, Aerial images, Disparity, Image reconstruction, Image processing and computer vision, Scene understanding, Deep learning
- Abstract
Depth is an essential component of various scene understanding tasks and of reconstructing the 3D geometry of the scene. Estimating depth from stereo images requires multiple views of the same scene to be captured, which is often not possible when exploring new environments with a UAV. To overcome this, monocular depth estimation has become a topic of interest with recent advances in computer vision and deep learning. This research has mostly focused on indoor scenes or outdoor scenes captured at ground level. Single-image depth estimation from aerial images has been limited due to additional complexities arising from increased camera distance and wide area coverage with many occlusions. A new aerial image dataset is prepared specifically for this purpose, combining Unmanned Aerial Vehicle (UAV) images covering different regions, features and points of view. The single-image depth estimation is based on image reconstruction techniques which use stereo images to learn to estimate depth from single images. Among the various available models for ground-level single-image depth estimation, two models, 1) a Convolutional Neural Network (CNN) and 2) a Generative Adversarial Network (GAN), are used to learn depth from UAV aerial images. These models generate pixel-wise disparity images which can be converted into depth information. The generated disparity maps are evaluated for their internal quality using various error metrics. The results show higher disparity ranges with smoother images generated by the CNN model and sharper images with a smaller disparity range generated by the GAN model. The produced disparity images are converted to depth information and compared with point clouds obtained using Pix4D. It is found that the CNN model performs better than the GAN and produces depth similar to that of Pix4D. This comparison helps in streamlining the efforts to produce depth from a single aerial image.
- Published
- 2020
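The disparity-to-depth conversion mentioned in the abstract above follows the standard stereo relation Z = f · B / d (focal length times baseline over disparity). A sketch with made-up camera values (the paper's actual calibration parameters are not given here):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Stereo depth from disparity: Z = f * B / d (f in pixels, B in metres)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# hypothetical UAV stereo setup: 1000 px focal length, 0.5 m baseline
print(disparity_to_depth(10.0, 1000.0, 0.5))  # 10 px disparity -> 50.0 m
```

The inverse relationship explains the abstract's observation about disparity ranges: at UAV flying heights depths are large, so the useful disparity range is compressed into few pixels.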
69. A hybrid global image orientation method for simultaneously estimating global rotations and global translations
- Author
Wang, Xin, Xiao, Teng, Kasten, Yoni, Paparoditis, N., Mallet, C., Lafarge, F., Remondino, F., Toschi, I., and Fuse, T.
- Subjects
Image orientation, Global rotations estimation, Global translation estimation, Global structure from motion (SfM)
- Abstract
In recent years, the determination of global image orientation, i.e. global SfM, has gained a lot of attention from researchers, mainly due to its time efficiency. Most global methods take relative rotations and translations as input for a two-step strategy comprised of global rotation averaging and global translation averaging. This paper, by contrast, presents a hybrid approach that solves for global rotations and translations simultaneously, but hierarchically. We first extract an optimal minimum cover connected image triplet set (OMCTS), which includes all available images with a minimum number of triplets, each with three relative orientations compatible with each other. For non-collinear triplets in the OMCTS, we introduce some basic characterizations of the corresponding essential matrices and solve for the image pose parameters by averaging the constrained essential matrices. For collinear triplets, on the other hand, the image pose parameters are estimated by relative orientation using the depth of object points from individual local spatial intersection. Finally, all image orientations are estimated in a common coordinate frame by traversing every solved triplet using a similarity transformation. We show results of our method on different benchmarks and demonstrate the performance and capability of the proposed approach by comparison with other global SfM methods.
- Published
- 2020
70. Semantic segmentation of Brazilian Savanna vegetation using high spatial resolution satellite data and U-net
- Author
Neves, A.K., Körting, T.S., Fonseca, L.M.G., Girolamo Neto, C.D., Wittich, Dennis, Costa, G.A.O.P., Heipke, Christian, Paparoditis, N., Mallet, C., and Lafarge, F.
- Subjects
Remote sensing, Pixel-wise classification, Deep learning, Segmentation, Convolutional neural network, Cerrado, Biome, Vegetation, Grassland, Physiognomies
- Abstract
Large-scale mapping of the Brazilian Savanna (Cerrado) vegetation using remote sensing images is still a challenge due to the high spatial variability and spectral similarity of the different characteristic vegetation types (physiognomies). In this paper, we report on semantic segmentation of the three major groups of physiognomies in the Cerrado biome (Grasslands, Savannas and Forests) using a fully convolutional neural network approach. The study area, which covers a Brazilian conservation unit, was divided into three regions to enable testing the approach on regions that were not used in the training phase. A WorldView-2 image was used in cross-validation experiments, in which the average overall accuracy achieved with the pixel-wise classifications was 87.0%. The F1-score values obtained for the classes Grassland, Savanna and Forest were 0.81, 0.90 and 0.88, respectively. Visual assessment of the semantic segmentation outcomes was also performed and confirmed the quality of the results. It was observed that confusion among classes occurs mainly in transition areas, where physiognomies are adjacent on a scale of increasing density, which agrees with previous studies on natural vegetation mapping for the Cerrado biome.
- Published
- 2020
71. Using redundant information from multiple aerial images for the detection of bomb craters based on marked point processes
- Author
-
Kruse, Christian, Rottensteiner, Franz, Heipke, Christian, Paparoditis, N., Mallet, C., Lafarge, F., Remondino, F., Toschi, I., and Fuse, T.
- Subjects
lcsh:Applied optics. Photonics ,Dewey Decimal Classification::500 | Naturwissenschaften::550 | Geowissenschaften ,Computer science ,Kernel density estimation ,0211 other engineering and technologies ,02 engineering and technology ,lcsh:Technology ,Point process ,multiple aerial wartime images ,marked point processes ,Impact crater ,RJMCMC ,0202 electrical engineering, electronic engineering, information engineering ,ddc:550 ,Computer vision ,Konferenzschrift ,021101 geological & geomatics engineering ,lcsh:T ,business.industry ,lcsh:TA1501-1820 ,Sampling (statistics) ,duds ,Reversible-jump Markov chain Monte Carlo ,Object detection ,lcsh:TA1-2040 ,impact maps ,Simulated annealing ,bomb craters ,Object model ,020201 artificial intelligence & image processing ,Artificial intelligence ,lcsh:Engineering (General). Civil engineering (General) ,business - Abstract
Many countries were the target of air strikes during World War II. Numerous unexploded bombs still exist in the ground. These duds can be tracked down with the help of bomb craters, which indicate areas where unexploded bombs may be located. Such areas are documented in so-called impact maps based on detected bomb craters. In this paper, a stochastic approach based on marked point processes (MPPs) for the automatic detection of bomb craters in aerial images taken during World War II is presented. As most areas are covered by multiple images, the influence of redundant image information on the object detection result is investigated: we compare the results generated from single images with those obtained by our new approach, which combines the individual detection results of multiple images covering the same location. The object model for the bomb craters is a circle. Our MPP approach determines the most likely configuration of objects within the scene by minimizing an energy function that describes the conformity with a predefined model, using Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampling in combination with simulated annealing. Afterwards, a probability map is generated from the automatic detections via kernel density estimation. By setting a threshold, areas around the detections are classified as contaminated or uncontaminated sites, respectively, which results in an impact map. Our results show a significant improvement in the quality of the impact map when redundant image information is used.
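The final step of the pipeline, turning detections into an impact map via kernel density estimation and a threshold, can be sketched as follows (a simplified Gaussian KDE with hypothetical crater positions, bandwidth and threshold; not the authors' implementation):

```python
import numpy as np

def impact_map(detections, grid_x, grid_y, bandwidth=10.0, threshold=1e-4):
    """Gaussian kernel density over crater detections, thresholded into
    contaminated (True) / uncontaminated (False) grid cells."""
    xx, yy = np.meshgrid(grid_x, grid_y)
    density = np.zeros_like(xx, dtype=float)
    for cx, cy in detections:
        d2 = (xx - cx) ** 2 + (yy - cy) ** 2
        density += np.exp(-d2 / (2.0 * bandwidth ** 2))
    density /= 2.0 * np.pi * bandwidth ** 2 * max(len(detections), 1)
    return density > threshold

craters = [(20.0, 30.0), (22.0, 28.0), (70.0, 75.0)]  # hypothetical detections
gx = gy = np.arange(0.0, 100.0, 5.0)
mask = impact_map(craters, gx, gy)  # boolean impact map on a 20x20 grid
```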
- Published
- 2020
72. Domain adaptation with cyclegan for change detection in the amazon forest
- Author
-
Soto, P.J., Costa, G.A.O.P., Feitosa, R.Q., Happ, P.N., Ortega, M.X., Noa, J., Almeida, C.A., Heipke, Christian, Paparoditis, N., Mallet, C., and Lafarge, F.
- Subjects
Dewey Decimal Classification::500 | Naturwissenschaften::550 | Geowissenschaften ,lcsh:Applied optics. Photonics ,Earth observation ,Computer science ,Remote sensing application ,domain adaptation ,Context (language use) ,Machine learning ,computer.software_genre ,lcsh:Technology ,Domain (software engineering) ,remote sensing ,ddc:550 ,change detection ,Konferenzschrift ,Training set ,business.industry ,lcsh:T ,Deep learning ,lcsh:TA1501-1820 ,cycle-consistent generative adversarial networks ,Reference data ,lcsh:TA1-2040 ,Artificial intelligence ,Transfer of learning ,business ,lcsh:Engineering (General). Civil engineering (General) ,computer ,Change detection - Abstract
Deep learning classification models require large amounts of labeled training data to perform properly, but the production of reference data for most Earth observation applications is a labor intensive, costly process. In that sense, transfer learning is an option to mitigate the demand for labeled data. In many remote sensing applications, however, the accuracy of a deep learning-based classification model trained with a specific dataset drops significantly when it is tested on a different dataset, even after fine-tuning. In general, this behavior can be credited to the domain shift phenomenon. In remote sensing applications, domain shift can be associated with changes in the environmental conditions during the acquisition of new data, variations of objects' appearances, geographical variability and different sensor properties, among other aspects. In recent years, deep learning-based domain adaptation techniques have been used to alleviate the domain shift problem. Recent improvements in domain adaptation technology rely on techniques based on Generative Adversarial Networks (GANs), such as the Cycle-Consistent Generative Adversarial Network (CycleGAN), which adapts images across different domains by learning nonlinear mapping functions between the domains. In this work, we exploit the CycleGAN approach for domain adaptation in a particular change detection application, namely, deforestation detection in the Amazon forest. Experimental results indicate that the proposed approach is capable of alleviating the effects associated with domain shift in the context of the target application. © 2020 International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives.
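The core idea behind CycleGAN's cycle-consistency constraint, that translating an image to the other domain and back should reproduce the input, can be illustrated with a toy example (scalar "generators" stand in for the networks; this illustrates the loss term only, not the authors' model):

```python
import numpy as np

def cycle_consistency_loss(x_a, g_ab, g_ba):
    """L1 cycle-consistency: translating A -> B -> A should reproduce
    the original input from domain A."""
    return float(np.abs(g_ba(g_ab(x_a)) - x_a).mean())

# toy 'generators': exact inverses give zero cycle loss
g_ab = lambda x: 2.0 * x + 1.0      # stands in for the A->B network
g_ba = lambda y: (y - 1.0) / 2.0    # stands in for the B->A network
x = np.array([0.0, 1.0, 2.0])
loss = cycle_consistency_loss(x, g_ab, g_ba)  # 0.0 for exact inverses
```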
- Published
- 2020
73. Creating multi-temporal maps of urban environments for improved localization of autonomous vehicles
- Author
-
Schachtschneider, Julia, Brenner, Claus, Paparoditis, N., Mallet, C., Lafarge, F., Remondino, Fabio, Toschi, Isabella, and Fuse, Takashi
- Subjects
Dewey Decimal Classification::500 | Naturwissenschaften::550 | Geowissenschaften ,LiDAR ,mobile mapping ,ddc:550 ,dynamic environments ,3D modelling ,localization ,Konferenzschrift - Abstract
The development of automated and autonomous vehicles requires highly accurate long-term maps of the environment. Urban areas contain a large number of dynamic objects which change over time. Since permanent observation of the environment is impossible and there will always be a first visit to an unknown or changed area, a map of an urban environment needs to model such dynamics. In this work, we use LiDAR point clouds from a large long-term measurement campaign to investigate temporal changes. The data set was recorded along a 20 km route in Hannover, Germany, with a Mobile Mapping System over a period of one year in bi-weekly measurements. The data set covers a variety of urban objects and areas, weather conditions and seasons. Based on this data set, we show how scene and seasonal effects influence the measurement likelihood, and that multi-temporal maps lead to the best positioning results. © 2020 International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives.
- Published
- 2020
74. Investigations on skip-connections with an additional cosine similarity loss for land cover classification
- Author
-
Yang, Chuan, Rottensteiner, Franz, Heipke, Christian, Paparoditis, N., Mallet, C., and Lafarge, F.
- Subjects
lcsh:Applied optics. Photonics ,Dewey Decimal Classification::500 | Naturwissenschaften::550 | Geowissenschaften ,010504 meteorology & atmospheric sciences ,Computer science ,Feature vector ,Pooling ,0211 other engineering and technologies ,02 engineering and technology ,lcsh:Technology ,01 natural sciences ,Convolutional neural network ,Convolution ,land cover classification ,aerial imagery ,ddc:550 ,cosine similarity loss ,Konferenzschrift ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,Pixel ,lcsh:T ,business.industry ,Cosine similarity ,lcsh:TA1501-1820 ,Pattern recognition ,skip-connections ,lcsh:TA1-2040 ,Feature (computer vision) ,Artificial intelligence ,lcsh:Engineering (General). Civil engineering (General) ,business ,Encoder ,CNN - Abstract
Pixel-based land cover classification of aerial images is a standard task in remote sensing, whose goal is to identify the physical material of the earth's surface. Recently, most well-performing methods rely on convolutional neural networks (CNN) with an encoder-decoder structure. In the encoder, successive convolution and pooling operations produce features at a lower spatial resolution; in the decoder, these features are up-sampled gradually, layer by layer, to make predictions at the original spatial resolution. However, the loss of spatial resolution caused by pooling affects the final classification performance negatively, which is compensated by skip-connections between corresponding features in the encoder and the decoder. The most popular ways to combine features are element-wise addition of feature maps and 1x1 convolution. In this work, we investigate skip-connections. We argue that not all skip-connections are equally important, and we conduct experiments designed to find out which ones matter. Moreover, we propose a new cosine similarity loss function that exploits the relationship between features of pixels belonging to the same category inside one mini-batch, i.e. these features should be close in feature space. Our experiments show that the new cosine similarity loss does help the classification. We evaluated our methods on the Vaihingen and Potsdam datasets of the ISPRS 2D semantic labelling challenge and achieved an overall accuracy of 91.1% for both test sites.
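A cosine similarity loss that pulls same-class features together inside a mini-batch could be sketched roughly as follows (a pairwise formulation assumed for illustration; the paper's exact definition may differ):

```python
import numpy as np

def cosine_similarity_loss(features, labels):
    """Mean (1 - cosine similarity) over all same-class feature pairs
    in a mini-batch; zero when same-class features are collinear."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    losses = []
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            if labels[i] == labels[j]:
                losses.append(1.0 - float(f[i] @ f[j]))
    return sum(losses) / len(losses) if losses else 0.0

feats = np.array([[1.0, 0.0], [2.0, 0.0], [0.0, 1.0]])  # toy features
labs = [0, 0, 1]
loss = cosine_similarity_loss(feats, labs)  # 0.0: class-0 vectors are parallel
```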
- Published
- 2020
- Full Text
- View/download PDF
75. Calibration and validation of Corona Kh-4B to generate height models and orthoimages
- Author
-
Jacobsen, Karsten, Paparoditis, N., Mallet, C., Lafarge, F., Hinz, S., Feitosa, R., Weinmann, M., and Jutzi, B.
- Subjects
Dewey Decimal Classification::500 | Naturwissenschaften::550 | Geowissenschaften ,lcsh:Applied optics. Photonics ,merge with TDM90 ,Calibration and validation ,010504 meteorology & atmospheric sciences ,0211 other engineering and technologies ,Base (geometry) ,Stereoscopy ,02 engineering and technology ,01 natural sciences ,Stability (probability) ,lcsh:Technology ,law.invention ,law ,Calibration ,ddc:550 ,Konferenzschrift ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,Remote sensing ,validation ,lcsh:T ,Ground sample distance ,lcsh:TA1501-1820 ,Grid ,calibration ,Corona ,CORONA-KH-4B ,lcsh:TA1-2040 ,lcsh:Engineering (General). Civil engineering (General) ,Geology - Abstract
CORONA KH-4B images, now 48 years old, are important for current urban planning in Bangladesh, where no old maps or aerial images exist, because they indicate the locations of former water courses that are no longer visible but cause problems for the stability of building ground. CORONA KH-4B images are available at very low cost or even free of charge. A forward- and a backward-looking panoramic camera with a nadir angle of approximately 15° enabled stereoscopic coverage. Taken from a height of 154 km, an image covers approximately 220 km × 14 km to 17 km, with a height-to-base relation of 1:1.85. The ground sampling distance (GSD) varies with the incidence angle from 1.77 m in the centre of the image up to 2.18 m in the Y-direction and 2.69 m in the X-direction at the ends of the image. These favourable properties of the old images are affected by the geometric problems of panoramic imagery. Geometric bending can be reduced by a correction based on the available side lines; deformations, dominantly in the longitudinal direction, can however only be determined based on ground control points. A group of neighbouring images has similar deformations, allowing the determination of a correction grid describing the systematic image errors. This improved the geometry strongly, but could not eliminate all individual and local geometric problems. The high morphologic quality of the CORONA images, with limited absolute height accuracy, has been improved by merging with the highly accurate TDM90 height model. For orthoimages, a fitting of neighbouring images was required. © 2020 Copernicus GmbH. All rights reserved.
- Published
- 2020
76. Improving deep learning based semantic segmentation with multi-view outlier correction
- Author
-
Peters, Torben, Brenner, Claus, Song, M., Paparoditis, N., Mallet, C., Lafarge, F., Remondino, Fabio, Toschi, Isabella, and Fuse, Takashi
- Subjects
Dewey Decimal Classification::500 | Naturwissenschaften::550 | Geowissenschaften ,multi-view ,ddc:550 ,deep learning ,transfer learning ,MMS ,Konferenzschrift ,point cloud - Abstract
The goal of this paper is to use transfer learning for semi-supervised semantic segmentation in 2D images: given a pretrained deep convolutional network (DCNN), our aim is to adapt it to a new camera-sensor system by enforcing predictions to be consistent for the same object in space. This is enabled by projecting 3D object points into multi-view 2D images. Since every 3D object point is usually mapped to a number of 2D images, each of which undergoes a pixel-wise classification using the pretrained DCNN, we obtain a number of predictions (labels) for the same object point. This makes it possible to detect and correct outlier predictions. Ultimately, we retrain the DCNN on the corrected dataset in order to adapt the network to the new input data. We demonstrate the effectiveness of our approach on a mobile mapping dataset containing over 10,000 images and more than 1 billion 3D points. Moreover, we manually annotated a subset of the mobile mapping images and show that we were able to raise the mean intersection over union (mIoU) by approximately 10% with Deeplabv3+ using our approach. © 2020 International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives.
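The outlier-correction idea, where the views a 3D point projects into vote on its label, can be sketched as a simple majority vote (hypothetical point IDs and labels; the projection step is omitted):

```python
from collections import Counter

def correct_labels(point_predictions):
    """For each 3D point, take the labels predicted in all 2D views it
    projects into and keep the majority label, correcting outlier views."""
    return {pid: Counter(labels).most_common(1)[0][0]
            for pid, labels in point_predictions.items()}

# hypothetical point IDs with per-view predictions
views = {101: ["road", "road", "car", "road"],   # one outlier view
         102: ["building", "building"]}
fixed = correct_labels(views)  # {101: 'road', 102: 'building'}
```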
- Published
- 2020
77. TRAINING in INNOVATIVE TECHNOLOGIES for CLOSE-RANGE SENSING in ALPINE TERRAIN-3RD EDITION
- Author
-
Rutzinger, M., Anders, K., Bremer, M., Höfle, Bernhard, Lindenbergh, Roderik C., Oude Elberink, S., Pirotti, F., Scaioni, M., Zieher, T., Paparoditis, N., Mallet, C., Lafarge, F., Kumar, S., Raju, P.L.N., Aggarwal, S.P., Reyes, S.R., Ustuner, M., Tsai, F., Liesenberg, V., Department of Earth Observation Science, Faculty of Geo-Information Science and Earth Observation, and UT-I-ITC-ACQUAL
- Subjects
lcsh:Applied optics. Photonics ,010504 meteorology & atmospheric sciences ,vegetation mapping ,media_common.quotation_subject ,0211 other engineering and technologies ,Knowledge transfer ,Terrain ,02 engineering and technology ,01 natural sciences ,Training (civil) ,lcsh:Technology ,Presentation ,close-range photogrammetry ,glaciology ,Relevance (information retrieval) ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,media_common ,Medical education ,Teamwork ,mountain research ,Mountain research ,lcsh:T ,Questionnaire ,lcsh:TA1501-1820 ,thermography ,multi-temporal 3D point cloud analysis ,natural hazards ,lcsh:TA1-2040 ,PhD education summer school ,terrestrial laser scanning ,Psychology ,lcsh:Engineering (General). Civil engineering (General) - Abstract
The 3rd edition of the international summer school “Close-range Sensing Techniques in Alpine Terrain” took place in Obergurgl, Austria, in June 2019. This article reports on the results of the training and seminar activities and on the outcome of a student questionnaire survey. Comparison between the recent edition and the previous edition in 2017 shows no significant differences in the level of satisfaction with organizational and training aspects. Gender balance was achieved both among candidates and in the outcome of the selection, which was based on past research activities and topic relevance. The majority of trainees were therefore doctoral candidates and postdoctoral researchers, but motivated master students also participated. The training consisted of keynotes, lectures and seminars, hands-on field surveys followed by data analysis in the lab, and teamwork on different assignments leading to a final team presentation.
- Published
- 2020
78. Deep domain adaptation by weighted entropy minimization for the classification of aerial images
- Author
-
Wittich, Dennis, Paparoditis, N., Mallet, C., Lafarge, F., Remondino, F., Toschi, I., and Fuse, T.
- Subjects
lcsh:Applied optics. Photonics ,Dewey Decimal Classification::500 | Naturwissenschaften::550 | Geowissenschaften ,Domain adaptation ,Computer science ,domain adaptation ,0211 other engineering and technologies ,02 engineering and technology ,Convolutional neural network ,lcsh:Technology ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,ddc:550 ,fully convolutional networks ,Supervised training ,Konferenzschrift ,021101 geological & geomatics engineering ,business.industry ,lcsh:T ,lcsh:TA1501-1820 ,Pattern recognition ,aerial images ,Weighting ,classification ,lcsh:TA1-2040 ,Weighted entropy ,Minification ,Artificial intelligence ,business ,lcsh:Engineering (General). Civil engineering (General) ,Classifier (UML) ,Entropy minimization ,entropy minimization - Abstract
Fully convolutional neural networks (FCN) are successfully used for the automated pixel-wise classification of aerial images and possibly additional data. However, they require many labelled training samples to perform well. One approach addressing this issue is semi-supervised domain adaptation (SSDA). Here, labelled training samples from a source domain and unlabelled samples from a target domain are used jointly to obtain a target domain classifier, without requiring any labelled samples from the target domain. In this paper, a two-step approach for SSDA is proposed. The first step corresponds to supervised training on the source domain, making use of strong data augmentation to increase the initial performance on the target domain. Secondly, the model is adapted by entropy minimization using a novel weighting strategy. The approach is evaluated on the basis of five domains, corresponding to five cities. Several training variants and adaptation scenarios are tested, indicating that proper data augmentation can already improve the initial target domain performance significantly, resulting in an average overall accuracy of 77.5%. The weighted entropy minimization improves the overall accuracy on the target domains in 19 out of 20 scenarios, on average by 1.8%. In all experiments a novel FCN architecture is used that yields results comparable to those of the best-performing models on the ISPRS labelling challenge while having an order of magnitude fewer parameters than commonly used FCNs.
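Entropy minimization on unlabelled target-domain predictions, with a per-pixel weighting, could be sketched as follows (the uniform-ish weights here are placeholders; the paper's novel weighting strategy is not reproduced):

```python
import numpy as np

def weighted_entropy(probs, weights, eps=1e-12):
    """Weighted mean Shannon entropy of per-pixel class probabilities;
    minimizing it drives the classifier towards confident predictions
    on unlabelled target-domain pixels."""
    h = -np.sum(probs * np.log(probs + eps), axis=-1)
    return float(np.sum(weights * h) / np.sum(weights))

probs = np.array([[0.90, 0.05, 0.05],   # confident pixel -> low entropy
                  [1/3, 1/3, 1/3]])     # maximally uncertain pixel
weights = np.array([1.0, 0.5])          # hypothetical per-pixel weights
loss = weighted_entropy(probs, weights)
```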
- Published
- 2020
79. Integrity - A topic for photogrammetry?
- Author
-
Schön, Steffen, Paparoditis, N., Mallet, C., Lafarge, F., Hinz, Stefan, Feitosa, Raul, Weinmann, Martin, and Jutzi, Boris
- Subjects
lcsh:Applied optics. Photonics ,Dewey Decimal Classification::500 | Naturwissenschaften::550 | Geowissenschaften ,multi-sensor systems ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,lcsh:Technology ,01 natural sciences ,law.invention ,LIDAR ,law ,0502 economics and business ,ddc:550 ,Radar ,Reliability (statistics) ,Konferenzschrift ,050210 logistics & transportation ,Measure (data warehouse) ,GNSS ,lcsh:T ,010401 analytical chemistry ,05 social sciences ,lcsh:TA1501-1820 ,Civil aviation ,0104 chemical sciences ,images ,Lidar ,Photogrammetry ,lcsh:TA1-2040 ,GNSS applications ,Systems engineering ,integrity ,lcsh:Engineering (General). Civil engineering (General) - Abstract
Photogrammetric methods and sensors such as LIDAR, RADAR and cameras are becoming more and more important for new applications like highly automated driving, since they enable capturing relative information about the ego vehicle w.r.t. its environment. Integrity measures the trust that can be put in the navigation information of a system. The concept of integrity was first developed for civil aviation and is linked to reliability concepts well known in geodesy and photogrammetry. Currently, the navigation community is discussing how to guarantee integrity for car navigation and multi-sensor systems. In this paper, we give a short review of integrity concepts and of the current discussion on how to apply them to car navigation. We discuss which role photogrammetry could play in solving the open issues in integrity definition and monitoring for multi-sensor systems. © 2020 International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives.
- Published
- 2020
80. Assessing the semantic similarity of images of silk fabrics using convolutional neural networks
- Author
-
Clermont, D., Dorozynski, Mareike, Wittich, Dennis, Rottensteiner, Franz, Paparoditis, N., Mallet, C., Lafarge, F., Remondino, F., Toschi, I., and Fuse, T.
- Subjects
lcsh:Applied optics. Photonics ,Dewey Decimal Classification::500 | Naturwissenschaften::550 | Geowissenschaften ,0209 industrial biotechnology ,Computer science ,Context (language use) ,02 engineering and technology ,lcsh:Technology ,Convolutional neural network ,silk fabrics ,incomplete training samples ,020901 industrial engineering & automation ,Semantic similarity ,Similarity (network science) ,Complete information ,convolutional neural networks ,0202 electrical engineering, electronic engineering, information engineering ,ddc:550 ,Konferenzschrift ,Training set ,lcsh:T ,business.industry ,lcsh:TA1501-1820 ,image similarity ,Pattern recognition ,Semantic property ,cultural heritage ,lcsh:TA1-2040 ,020201 artificial intelligence & image processing ,Artificial intelligence ,lcsh:Engineering (General). Civil engineering (General) ,business
This paper proposes several methods for training a Convolutional Neural Network (CNN) for learning the similarity between images of silk fabrics based on multiple semantic properties of the fabrics. In the context of the EU H2020 project SILKNOW (http://silknow.eu/), two variants of training were developed, one based on a Siamese CNN and one based on a triplet architecture. We propose different definitions of similarity and different loss functions for both training strategies, some of them also allowing the use of incomplete information about the training data. We assess the quality of the trained model by using the learned image features in a k-NN classification. We achieve overall accuracies of 93–95% and average F1-scores of 87–92%.
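The triplet training variant relies on the standard hinge triplet loss, which can be sketched as follows (toy 2-D embeddings stand in for CNN features):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge triplet loss: the anchor image should be closer to the
    positive (similar fabric) than to the negative by at least `margin`."""
    d_pos = float(np.sum((anchor - positive) ** 2))
    d_neg = float(np.sum((anchor - negative) ** 2))
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])   # toy embeddings, not real CNN features
p = np.array([0.1, 0.0])
n = np.array([1.0, 1.0])
loss = triplet_loss(a, p, n)  # 0.0: the negative is already far enough
```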
- Published
- 2020
81. Deep learning based feature matching and its application in image orientation
- Author
-
Chen, Lie, Rottensteiner, Franz, Heipke, Christian, Paparoditis, N., Mallet, C., Lafarge, F., Remondino, F., Toschi, I., and Fuse, T.
- Subjects
Dewey Decimal Classification::500 | Naturwissenschaften::550 | Geowissenschaften ,descriptor learning ,feature orientation ,ddc:550 ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,image matching ,affine shape estimation ,image orientation ,Konferenzschrift - Abstract
Matching images containing large viewpoint and viewing direction changes, resulting in large perspective differences, is still a very challenging problem. Affine shape estimation, orientation assignment and feature description algorithms based on detected hand-crafted features have been shown to be error-prone. In this paper, affine shape estimation, orientation assignment and description of local features are achieved through deep learning. These three modules are trained with loss functions that optimize the matching performance of input patch pairs. The trained descriptors are first evaluated on the Brown dataset (Brown et al., 2011), a standard descriptor performance benchmark. The whole pipeline is then tested on images of small blocks acquired with an aerial penta camera, to compute image orientation. The results show that learned features perform significantly better than alternatives based on hand-crafted features. © 2020 Copernicus GmbH. All rights reserved.
- Published
- 2020
- Full Text
- View/download PDF
82. LR-CNN: Local-aware Region CNN for Vehicle Detection in Aerial Imagery
- Author
-
Liao, Wengtong, Chen, Xiang, Yang, Jingfeng, Roth, Stefan, Goesele, Michael, Ying Yang, Michael, Rosenhahn, Bodo, Paparoditis, N., Mallet, C., Lafarge, F., Remondino, F., Toschi, I., Fuse, T., Department of Earth Observation Science, UT-I-ITC-ACQUAL, and Faculty of Geo-Information Science and Earth Observation
- Subjects
Dewey Decimal Classification::500 | Naturwissenschaften::550 | Geowissenschaften ,lcsh:Applied optics. Photonics ,FOS: Computer and information sciences ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,02 engineering and technology ,010501 environmental sciences ,01 natural sciences ,Convolutional neural network ,lcsh:Technology ,ddc:550 ,0202 electrical engineering, electronic engineering, information engineering ,Quantization (image processing) ,Konferenzschrift ,0105 earth and related environmental sciences ,business.industry ,Orientation (computer vision) ,lcsh:T ,Deep learning ,deep learning ,lcsh:TA1501-1820 ,object detection ,Pattern recognition ,Object detection ,Feature (computer vision) ,lcsh:TA1-2040 ,vehicle detection ,twin region proposal ,feature enhancement ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Focus (optics) ,lcsh:Engineering (General). Civil engineering (General) ,Interpolation - Abstract
State-of-the-art object detection approaches such as Fast/Faster R-CNN, SSD, or YOLO have difficulties detecting dense, small targets with arbitrary orientation in large aerial images. The main reason is that using interpolation to align RoI features can result in a lack of accuracy or even loss of location information. We present the Local-aware Region Convolutional Neural Network (LR-CNN), a novel two-stage approach for vehicle detection in aerial imagery. We enhance translation invariance to detect dense vehicles and address the boundary quantization issue amongst dense vehicles by aggregating the high-precision RoIs' features. Moreover, we resample high-level semantic pooled features, making them regain location information from the features of a shallower convolutional block. This strengthens the local feature invariance for the resampled features and enables detecting vehicles in an arbitrary orientation. The local feature invariance enhances the learning ability of the focal loss function, and the focal loss further helps to focus on the hard examples. Taken together, our method better addresses the challenges of aerial imagery. We evaluate our approach on several challenging datasets (VEDAI, DOTA), demonstrating a significant improvement over state-of-the-art methods. We demonstrate the good generalization ability of our approach on the DLR 3K dataset.
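The focal loss mentioned above down-weights easy examples so that training focuses on hard ones; its binary form can be sketched as follows (a minimal sketch of the standard formulation, not the LR-CNN code):

```python
import math

def focal_loss(p, y, gamma=2.0):
    """Binary focal loss: (1 - p_t)^gamma scales the cross-entropy so
    well-classified (easy) examples contribute little. p is the predicted
    probability of the positive class, y the 0/1 ground-truth label."""
    pt = p if y == 1 else 1.0 - p
    return -((1.0 - pt) ** gamma) * math.log(pt)

easy = focal_loss(0.95, 1)  # confidently correct -> near-zero loss
hard = focal_loss(0.10, 1)  # badly misclassified -> large loss
```

With gamma = 0 the loss reduces to plain cross-entropy.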
- Published
- 2020
- Full Text
- View/download PDF
83. Automatically generated training data for land cover classification with cnns using sentinel-2 images
- Author
-
Voelsen, M., Bostelmann, J., Maas, A., Rottensteiner, Franz, Heipke, Christian, Paparoditis, N., Mallet, C., and Lafarge, F.
- Subjects
lcsh:Applied optics. Photonics ,Dewey Decimal Classification::500 | Naturwissenschaften::550 | Geowissenschaften ,Computer science ,0211 other engineering and technologies ,02 engineering and technology ,Land cover ,lcsh:Technology ,Convolutional neural network ,remote sensing ,Data acquisition ,land cover ,0202 electrical engineering, electronic engineering, information engineering ,ddc:550 ,Satellite imagery ,Konferenzschrift ,021101 geological & geomatics engineering ,Training set ,lcsh:T ,business.industry ,Spatial database ,Deep learning ,sentinel-2 ,lcsh:TA1501-1820 ,deep learning ,Pattern recognition ,semantic segmentation ,lcsh:TA1-2040 ,020201 artificial intelligence & image processing ,Artificial intelligence ,lcsh:Engineering (General). Civil engineering (General) ,business ,Classifier (UML) ,Change detection ,CNN - Abstract
Pixel-wise classification of remote sensing imagery is highly interesting for tasks like land cover classification or change detection. The acquisition of large training data sets for these tasks is challenging, but necessary to obtain good results with deep learning algorithms such as convolutional neural networks (CNN). In this paper we present a method for the automatic generation of a large amount of training data by combining satellite imagery with reference data from an available geospatial database. Due to this combination of different data sources, the resulting training data contain a certain amount of incorrect labels. We evaluate the influence of this so-called label noise with regard to the time difference between the acquisition of the two data sources, the amount of training data and the class structure. We combine Sentinel-2 images with reference data from a geospatial database provided by the German Land Survey Office of Lower Saxony (LGLN). With different training sets we train a fully convolutional neural network (FCN) and classify four land cover classes (Building, Agriculture, Forest, Water). Our results show that the errors in the training samples do not have a large influence on the resulting classifiers. This is probably due to the fact that the noise is randomly distributed and thus, neighbours of incorrect samples are predominantly correct. As expected, a larger amount of training data improves the results, especially for the less well represented classes. Other influences are different illumination conditions and seasonal effects during data acquisition. To better adapt the classifier to these different conditions, they should also be included in the training data. © 2020 International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives.
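The effect of label noise from an outdated geodatabase can be emulated in controlled experiments by randomly flipping a fraction of the reference labels (a simplified simulation for illustration, not the authors' data generation procedure):

```python
import random

def inject_label_noise(labels, classes, rate, seed=0):
    """Randomly flip a fraction `rate` of labels to a different class,
    mimicking errors from an outdated geodatabase used as reference."""
    rng = random.Random(seed)
    noisy = []
    for lab in labels:
        if rng.random() < rate:
            noisy.append(rng.choice([c for c in classes if c != lab]))
        else:
            noisy.append(lab)
    return noisy

classes = ["Building", "Agriculture", "Forest", "Water"]
clean = ["Forest"] * 80 + ["Water"] * 20
noisy = inject_label_noise(clean, classes, rate=0.1)  # ~10% label noise
```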
- Published
- 2020
84. PHOTOMATCH: AN OPEN-SOURCE MULTI-VIEW and MULTI-MODAL FEATURE MATCHING TOOL for PHOTOGRAMMETRIC APPLICATIONS
- Author
-
González-Aguilera, D., Ruiz De Oña, E., López-Fernandez, L., Farella, E. M., Stathopoulou, E. K., Toschi, I., Remondino, F., Rodríguez-Gonzálvez, P., Hernández-López, D., Fusiello, A., Nex, F., Paparoditis, N., Mallet, C., Lafarge, F., Kumar, S., Raju, P.L.N., Aggarwal, S.P., Reyes, S.R., Ustuner, M., Tsai, F., Liesenberg, V., Department of Earth Observation Science, UT-I-ITC-ACQUAL, and Faculty of Geo-Information Science and Earth Observation
- Subjects
lcsh:Applied optics. Photonics ,Matching (statistics) ,Open-source ,010504 meteorology & atmospheric sciences ,Computer science ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Scale-invariant feature transform ,02 engineering and technology ,computer.software_genre ,01 natural sciences ,lcsh:Technology ,Descriptors ,Detectors ,Matching ,Photogrammetry ,Software development ,Tie points ,0202 electrical engineering, electronic engineering, information engineering ,0105 earth and related environmental sciences ,lcsh:T ,Perspective (graphical) ,3D reconstruction ,Detector ,lcsh:TA1501-1820 ,Pipeline (software) ,lcsh:TA1-2040 ,Benchmark (computing) ,020201 artificial intelligence & image processing ,Data mining ,lcsh:Engineering (General). Civil engineering (General) ,computer - Abstract
Automatic feature matching is a crucial step in Structure-from-Motion (SfM) applications for 3D reconstruction purposes. From a historical perspective we can now say that SIFT was the enabling technology that made SfM a successful and fully automated pipeline. SIFT was the ancestor of a wealth of detector/descriptor methods that are now available. Various research activities have tried to benchmark detector/descriptor operators, but a clear conclusion is difficult to draw. This paper presents an ISPRS Scientific Initiative aimed at providing the community with an educational open-source tool (called PhotoMatch) for tie point extraction and image matching. Several enhancement and decolorization methods can initially be applied to an image dataset in order to improve the subsequent feature extraction steps. Different detector/descriptor combinations are then possible, coupled with different matching strategies and quality control metrics. Examples and results show the implemented functionality of PhotoMatch, which also includes a tutorial briefly explaining the implemented methods.
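A typical tie-point matching strategy that such a tool supports is nearest-neighbour descriptor matching with Lowe's ratio test, sketched here on toy descriptors (this is a generic sketch, not PhotoMatch's actual API):

```python
import numpy as np

def match_ratio_test(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test:
    accept a match only if the best distance is clearly smaller than
    the distance to the second-best candidate."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

da = np.array([[0.0, 0.0], [5.0, 5.0]])             # toy descriptors, image A
db = np.array([[0.1, 0.0], [5.0, 5.1], [9.0, 9.0]]) # toy descriptors, image B
m = match_ratio_test(da, db)  # [(0, 0), (1, 1)]
```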
- Published
- 2020
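The abstract above describes benchmarking detector/descriptor combinations and matching strategies. As a minimal sketch of the kind of matching step such tools evaluate (not PhotoMatch's actual code), nearest-neighbour descriptor matching with Lowe's ratio test can be written as follows; the toy descriptors are invented for illustration:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping only matches that pass Lowe's ratio test (best distance clearly
    smaller than the second-best distance)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy example: each descriptor in A has one clearly closest counterpart in B.
A = np.array([[1.0, 0.0], [0.5, 0.5]])
B = np.array([[1.0, 0.1], [0.0, 1.0], [0.45, 0.55]])
print(ratio_test_matches(A, B))  # → [(0, 0), (1, 2)]
```

The ratio test discards ambiguous correspondences, which is why it remains the default filtering step in most SIFT-style matching pipelines.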
85. Study of the impact of climate change on precipitation in Paris area using method based on iterative multiscale dynamic time warping (IMS-DTW)
- Author
-
Dilmi, Mohamed Djallel, Barthès, Laurent, Mallet, Cécile, and Chazottes, Aymeric
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Physics - Atmospheric and Oceanic Physics ,Statistics - Machine Learning ,Atmospheric and Oceanic Physics (physics.ao-ph) ,FOS: Physical sciences ,Applications (stat.AP) ,Machine Learning (stat.ML) ,Statistics - Applications ,Machine Learning (cs.LG) - Abstract
Studying the impact of climate change on precipitation requires a way to evaluate the evolution of precipitation variability over time. Classical (feature-based) approaches have shown their limitations for this issue due to the intermittent and irregular nature of precipitation. In this study, we present a novel variant of the dynamic time warping method that quantifies the dissimilarity between two rainfall time series through shape comparison, for clustering annual time series recorded at daily scale. This shape-based approach takes the whole information content into account (variability, trends and intermittency). We further labeled each cluster using a feature-based approach. Testing the proposed approach on the time series of Paris Montsouris, we found that precipitation variability has increased over the years in the Paris area.
- Published
- 2019
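The paper's IMS-DTW variant is iterative and multiscale; the recurrence it builds on is classical dynamic time warping, which can be sketched as follows (a baseline only — the multiscale iteration itself is not reproduced here):

```python
import numpy as np

def dtw_distance(a, b):
    """Classical dynamic time warping dissimilarity between two 1-D series,
    computed by dynamic programming over all monotone alignments."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best alignment ending at (i, j): extend a match, insertion or deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Identical series have zero dissimilarity; a pure time shift is absorbed
# by the warping, which is what makes DTW robust to intermittent rainfall.
print(dtw_distance([0, 1, 2, 1, 0], [0, 1, 2, 1, 0]))     # → 0.0
print(dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0]))  # → 0.0
```

This invariance to local time shifts is precisely why a DTW-based dissimilarity suits irregular series better than feature-based comparisons of fixed calendar windows.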
86. Shape Recognition
- Author
-
Lafarge, F., primary and Mallet, C., additional
- Published
- 2013
- Full Text
- View/download PDF
87. A pilot study to understand feasibility and acceptability of stool and cord blood sample collection for a large-scale longitudinal birth cohort
- Author
-
Bailey, SR, Townsend, CL, Dent, H, Mallet, C, Tsaliki, E, Riley, EM, Noursadeghi, M, Lawley, TD, Rodger, AJ, Brocklehurst, P, and Field, N
- Subjects
Adult ,Pilot Projects ,lcsh:Gynecology and obstetrics ,Specimen Handling ,Feces ,Biological samples ,Acceptability ,Pregnancy ,Preoperative Care ,Infant faeces ,Humans ,Pregnancy, Prolonged ,Longitudinal Studies ,lcsh:RG1-991 ,Blood Specimen Collection ,Bioarchive ,Cesarean Section ,Cord blood ,Feasibility ,Patient Acceptance of Health Care ,Fetal Blood ,United Kingdom ,Large-scale birth cohorts ,Feasibility Studies ,Female ,Maternal Serum Screening Tests ,Research Article - Abstract
Background Few data are available to guide biological sample collection around the time of birth for large-scale birth cohorts. We are designing a large UK birth cohort to investigate the role of infection and the developing immune system in determining future health and disease. We undertook a pilot to develop methodology for the main study, gain practical experience of collecting samples, and understand the acceptability of sample collection to women in late pregnancy. Methods Between February and July 2014, we piloted the feasibility and acceptability of collecting maternal stool, baby stool and cord blood samples from participants recruited at prolonged pregnancy and planned pre-labour caesarean section clinics at University College London Hospital. Participating women were asked to complete acceptability questionnaires. Results Overall, 265 women were approached and 171 (65%) participated, with ≥1 sample collected from 113 women or their baby (66%). Women had a mean age of 34 years, were primarily of white ethnicity (130/166, 78%), and half were nulliparous (86/169, 51%). Women undergoing planned pre-labour caesarean section were more likely than those who delivered vaginally to provide ≥1 sample (98% vs 54%), but less likely to provide maternal stool (10% vs 43%). Pre-sample questionnaires were completed by 110/171 women (64%). Most women reported feeling comfortable with samples being collected from their baby (
- Published
- 2017
88. Regulation of β-and α-glycolytic activities in the sediments of a eutrophic lake
- Author
-
Mallet, C. and Debroas, D.
- Published
- 2001
- Full Text
- View/download PDF
89. PREFACE: THE 2020 EDITION OF THE XXIVTH ISPRS CONGRESS
- Author
-
Mallet, C., primary, Lafarge, F., additional, Poreba, M., additional, Rupnik, E., additional, Bahl, G., additional, Girard, N., additional, Garioud, A., additional, Dowman, I., additional, and Paparoditis, N., additional
- Published
- 2020
- Full Text
- View/download PDF
90. CLASSIFICATION OF TIME SERIES OF SENTINEL-2 IMAGES FOR LARGE SCALE MAPPING IN CAMEROON
- Author
-
Tagne, H., primary, Le Bris, A., additional, Monkam, D., additional, and Mallet, C., additional
- Published
- 2020
- Full Text
- View/download PDF
91. ON THE JOINT EXPLOITATION OF OPTICAL AND SAR SATELLITE IMAGERY FOR GRASSLAND MONITORING
- Author
-
Garioud, A., primary, Valero, S., additional, Giordano, S., additional, and Mallet, C., additional
- Published
- 2020
- Full Text
- View/download PDF
92. CURRENT CHALLENGES IN OPERATIONAL VERY HIGH RESOLUTION LAND-COVER MAPPING
- Author
-
Mallet, C., primary and Le Bris, A., additional
- Published
- 2020
- Full Text
- View/download PDF
93. SEMCITY TOULOUSE: A BENCHMARK FOR BUILDING INSTANCE SEGMENTATION IN SATELLITE IMAGES
- Author
-
Roscher, R., primary, Volpi, M., additional, Mallet, C., additional, Drees, L., additional, and Wegner, J. D., additional
- Published
- 2020
- Full Text
- View/download PDF
94. CNN SEMANTIC SEGMENTATION TO RETRIEVE PAST LAND COVER OUT OF HISTORICAL ORTHOIMAGES AND DSM: FIRST EXPERIMENTS
- Author
-
Le Bris, A., primary, Giordano, S., additional, and Mallet, C., additional
- Published
- 2020
- Full Text
- View/download PDF
95. Application and implementation of the RESPONSE methodology to evaluate the impact of climate change on the Aquitaine and Languedoc Roussillon coastlines
- Author
-
Mallet, C, primary, Balouin, Y, additional, Closset, L, additional, Garcin, M, additional, Idier, D, additional, Vinchon, C, additional, and Aubié, S, additional
- Published
- 2007
- Full Text
- View/download PDF
96. Changement climatique et risques littoraux : apports scientifiques pour une adaptation durable et juste
- Author
-
Nicolas ROCLE, Mallet, C., Castelle, B., Chaumillon, E., Environnement, territoires et infrastructures (UR ETBX), Institut national de recherche en sciences et technologies pour l'environnement et l'agriculture (IRSTEA), Bureau de Recherches Géologiques et Minières (BRGM) (BRGM), Environnements et Paléoenvironnements OCéaniques (EPOC), Observatoire aquitain des sciences de l'univers (OASU), Université Sciences et Technologies - Bordeaux 1-Institut national des sciences de l'Univers (INSU - CNRS)-Centre National de la Recherche Scientifique (CNRS)-Université Sciences et Technologies - Bordeaux 1-Institut national des sciences de l'Univers (INSU - CNRS)-Centre National de la Recherche Scientifique (CNRS)-École pratique des hautes études (EPHE), Université Paris sciences et lettres (PSL)-Université Paris sciences et lettres (PSL)-Centre National de la Recherche Scientifique (CNRS), LIttoral ENvironnement et Sociétés - UMRi 7266 (LIENSs), Université de La Rochelle (ULR)-Centre National de la Recherche Scientifique (CNRS), (partenariat avec la sphère publique (sans AO)), irstea, and Région Nouvelle-Aquitaine
- Subjects
[SDE]Environmental Sciences ,SEA LEVEL RISE ,FAIR ADAPTATION ,COASTAL RISKS - Abstract
This contribution aims to be a summary and a presentation of knowledge from research conducted by the laboratories of Nouvelle-Aquitaine on the physical risks of coastal areas, their evolution as a result of climate change, and the socio-economic and political dynamics at work with regard to adaptation measures and strategies. It is the result of a collaboration between approximately 30 researchers from the following laboratories: BRGM, Criham (Université de Poitiers), EPOC (UMR Université de Bordeaux - CNRS), ETBX (Irstea), GREThA (UMR Université de Bordeaux - CNRS), LIENSs (UMR Université de La Rochelle - CNRS), ONF, SIAME (Université de Pau et des Pays de l'Adour).
- Published
- 2019
97. QGIS et outils génériques
- Author
-
Baghdadi, N., Mallet, C., Zribi, Mehrez, Territoires, Environnement, Télédétection et Information Spatiale (UMR TETIS), Centre de Coopération Internationale en Recherche Agronomique pour le Développement (Cirad)-AgroParisTech-Institut national de recherche en sciences et technologies pour l'environnement et l'agriculture (IRSTEA)-Centre National de la Recherche Scientifique (CNRS), Centre d'études spatiales de la biosphère (CESBIO), Institut de Recherche pour le Développement (IRD)-Université Toulouse III - Paul Sabatier (UT3), Université Fédérale Toulouse Midi-Pyrénées-Université Fédérale Toulouse Midi-Pyrénées-Observatoire Midi-Pyrénées (OMP), Université Fédérale Toulouse Midi-Pyrénées-Centre National d'Études Spatiales [Toulouse] (CNES)-Centre National de la Recherche Scientifique (CNRS), ISTE, Université Paris-Est Marne-la-Vallée (UPEM), Centre National d'Études Spatiales [Toulouse] (CNES)-Observatoire Midi-Pyrénées (OMP), Université Fédérale Toulouse Midi-Pyrénées-Université Fédérale Toulouse Midi-Pyrénées-Université Toulouse III - Paul Sabatier (UT3), Université Fédérale Toulouse Midi-Pyrénées-Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement (INRAE)-Institut de Recherche pour le Développement (IRD)-Centre National de la Recherche Scientifique (CNRS), Institut national de recherche en sciences et technologies pour l'environnement et l'agriculture (IRSTEA)-Centre de Coopération Internationale en Recherche Agronomique pour le Développement (Cirad)-AgroParisTech-Centre National de la Recherche Scientifique (CNRS), Institut de Recherche pour le Développement (IRD)-Centre National de la Recherche Scientifique (CNRS)-Université Fédérale Toulouse Midi-Pyrénées-Centre National d'Études Spatiales [Toulouse] (CNES)-Météo France-Institut de Recherche pour le Développement (IRD)-Centre National de la Recherche Scientifique (CNRS)-Université Fédérale Toulouse Midi-Pyrénées-Météo France-Université Toulouse III - Paul Sabatier (UT3), 
Université Fédérale Toulouse Midi-Pyrénées-Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement (INRAE)-Centre National de la Recherche Scientifique (CNRS), Laboratoire des Sciences et Technologies de l'Information Géographique (LaSTIG), École nationale des sciences géographiques (ENSG), Institut National de l'Information Géographique et Forestière [IGN] (IGN)-Institut National de l'Information Géographique et Forestière [IGN] (IGN), Centre National de la Recherche Scientifique (CNRS)-Institut de Recherche pour le Développement (IRD)-Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement (INRAE)-Université Toulouse III - Paul Sabatier (UT3), Université Fédérale Toulouse Midi-Pyrénées-Université Fédérale Toulouse Midi-Pyrénées-Institut national des sciences de l'Univers (INSU - CNRS)-Observatoire Midi-Pyrénées (OMP), and Météo France-Centre National d'Études Spatiales [Toulouse] (CNES)-Université Fédérale Toulouse Midi-Pyrénées-Centre National de la Recherche Scientifique (CNRS)-Institut de Recherche pour le Développement (IRD)-Météo France-Centre National d'Études Spatiales [Toulouse] (CNES)-Centre National de la Recherche Scientifique (CNRS)
- Subjects
[SDE]Environmental Sciences ,[INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV] ,[SDU.ENVI]Sciences of the Universe [physics]/Continental interfaces, environment ,ComputingMilieux_MISCELLANEOUS - Abstract
This book opens the series "Utilisation de QGIS en télédétection" (QGIS in Remote Sensing), which aims to facilitate the adoption and operational use of the geographic information system (GIS) QGIS (Quantum Geographic Information System) in the field of remote sensing. This volume describes the working principles of QGIS and of the core libraries most frequently used in image processing and geomatics: GDAL, GRASS, SAGA and OTB. It thereby presents many functionalities that are put into practice in numerous remote sensing and spatial analysis use cases.
- Published
- 2018
98. Grain-boundary embrittlement in highly irradiated 15-15Ti austenitic steel
- Author
-
Courcelle, A., Jublot, M., Bisor, C., Séran, J.L., Turque, I., Rabouille, O., Azera, M., Mallet, C., Verhaeghe, B., Desserouer, C., Cloute-Cazalaa, V., Grange, P., Pegaitaz, Y., Perez, T., Schildknecht, D., Service d'Etudes des Matériaux Irradiés (SEMI), Département des Matériaux pour le Nucléaire (DMN), CEA-Direction des Energies (ex-Direction de l'Energie Nucléaire) (CEA-DES (ex-DEN)), Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Université Paris-Saclay-CEA-Direction des Energies (ex-Direction de l'Energie Nucléaire) (CEA-DES (ex-DEN)), and Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Université Paris-Saclay
- Subjects
irradiation ,[PHYS.NUCL]Physics [physics]/Nuclear Theory [nucl-th] ,austenitic ,TEM ,Astrid ,[PHYS.NEXP]Physics [physics]/Nuclear Experiment [nucl-ex] ,15-15Ti ,ComputingMilieux_MISCELLANEOUS - Abstract
International audience
- Published
- 2018
99. An investigation of the gas flow mechanics in longwall goafs
- Author
-
Balusu, R, primary, Xue, S, additional, Wendt, M, additional, Mallet, C, additional, Robertson, S, additional, Holland, R, additional, Moreby, R, additional, Mclean, D, additional, and Deguchi, G, additional
- Published
- 2002
- Full Text
- View/download PDF
100. NETWORK DETECTION IN RASTER DATA USING MARKED POINT PROCESSES
- Author
-
Schmidt, Alena, Kruse, Christian, Rottensteiner, Franz, Sörgel, Uwe, Heipke, Christian, L. Halounova, L., Schindler, K., Limpouch, A., Pajdla, T., Šafář, V., Mayer, H., Oude Elberink, S., Mallet, C., Rottensteiner, F., Brédif, M., Skaloud, J., and Stilla, U.
- Subjects
lcsh:Applied optics. Photonics ,Theoretical computer science ,Computer science ,Stochastic modelling ,Digital terrain models ,0211 other engineering and technologies ,Markov process ,02 engineering and technology ,lcsh:Technology ,Graph ,Raster data ,symbols.namesake ,Line segment ,RJMCMC ,0202 electrical engineering, electronic engineering, information engineering ,Networks (circuits) ,Random geometric graph ,Konferenzschrift ,Dewey Decimal Classification::500 | Naturwissenschaften ,021101 geological & geomatics engineering ,Probabilistic framework ,Marked point process ,Landforms ,Stochastic systems ,lcsh:T ,Markov processes ,lcsh:TA1501-1820 ,Function (mathematics) ,Reversible-jump Markov chain Monte Carlo ,Remote sensing ,Most probable configurations ,Dewey Decimal Classification::500 | Naturwissenschaften::520 | Astronomie, Kartographie ,Stochastic models ,Digital terrain model ,lcsh:TA1-2040 ,symbols ,ddc:520 ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,Marked point processes ,ddc:500 ,Networks ,lcsh:Engineering (General). Civil engineering (General) ,Algorithm ,Reversible jump Markov chain Monte Carlo - Abstract
We propose a new approach for the automatic detection of network structures in raster data. The network structure is modeled by a graph whose nodes and edges correspond to junction points and to connecting line segments, respectively; nodes and edges are further described by certain parameters. We embed this model in the probabilistic framework of marked point processes and determine the most probable configuration of objects by stochastic sampling: different graph configurations are constructed randomly by modifying the graph entity parameters and by adding nodes and edges to, or removing them from, the current graph configuration. Each configuration is then evaluated based on the probabilities of the changes and on an energy function describing the conformity with a predefined model. Using the Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampler, a global optimum of the energy function is determined. We apply our method to the detection of river and tidal channel networks in digital terrain models. In comparison to our previous work, we introduce constraints concerning the flow direction of water into the energy function. Our goal is to analyse the influence of different parameter settings on the results of network detection in both synthetic and real data. Our results show the general potential of our method for the detection of river networks in different types of terrain.
- Published
- 2016
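The abstract above samples graph configurations by randomly adding and removing entities and accepting changes based on an energy function. The following toy sketch illustrates only the birth/death move idea with a Metropolis acceptance rule on a hand-made energy (data term plus a count prior); it omits the graph edges and the dimension-matching details of true RJMCMC, and the `signal` values are invented for illustration:

```python
import math
import random

def energy(points, signal, prior_weight=1.0):
    """Toy energy: reward points on cells with strong evidence (data term),
    penalise the number of points (a crude prior against clutter)."""
    return -sum(signal[p] for p in points) + prior_weight * len(points)

def birth_death_sampler(signal, n_iter=2000, temperature=0.05, seed=0):
    """Greatly simplified birth/death sampler in the spirit of RJMCMC:
    propose adding or removing one point, accept with a Metropolis rule."""
    rng = random.Random(seed)
    cells = list(signal)
    points, e = set(), 0.0
    for _ in range(n_iter):
        proposal = set(points)
        c = rng.choice(cells)
        if c in proposal:
            proposal.discard(c)  # death move
        else:
            proposal.add(c)      # birth move
        e_new = energy(proposal, signal)
        # Metropolis acceptance on the energy difference
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / temperature):
            points, e = proposal, e_new
    return points

# Toy 1-D "raster": cells 2 and 3 carry strong evidence for a network point.
signal = {0: 0.1, 1: 0.2, 2: 3.0, 3: 2.5, 4: 0.1}
print(sorted(birth_death_sampler(signal)))
```

At the low default temperature, only cells whose data term outweighs the prior penalty tend to survive; raising the temperature lets the sampler explore higher-energy configurations, which is the trade-off the paper's parameter study examines for full graph configurations.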