RESAMPLING METHODS FOR A RELIABLE VALIDATION SET IN DEEP LEARNING BASED POINT CLOUD CLASSIFICATION
- Author
Nurunnabi, A. and Teferle, F. N.
- Subjects
POINT cloud, DEEP learning, CLASSIFICATION, MACHINE learning, STATISTICAL sampling
- Abstract
A validation data set plays a pivotal role in tuning a machine learning model trained in a supervised manner. Many existing algorithms select a part of the available data by random sampling to produce a validation set. However, this approach can be prone to overfitting. One should follow careful data splitting to obtain reliable training and validation sets that can produce a generalized model with good performance on unseen (test) data. Data splitting based on resampling techniques involves repeatedly drawing samples from the available data. Hence, resampling methods can give a model better generalization power, because they can produce and use many training and/or validation sets. These techniques are computationally expensive, but with increasingly available high-performance computing facilities one can exploit them. Though a multitude of resampling methods exist, investigation of their influence on the generality of deep learning (DL) algorithms is limited due to their non-linear black-box nature. This paper contributes by: (1) investigating the generalization capability of the four most popular resampling methods: k-fold cross-validation (k-CV), repeated k-CV (Rk-CV), Monte Carlo CV (MC-CV) and bootstrap for creating training and validation data sets used for developing, training and validating DL based point cloud classifiers (e.g., PointNet; Qi et al., 2017a), (2) justifying Mean Square Error (MSE) as a statistically consistent estimator, and (3) exploring the use of MSE as a reliable performance metric for supervised DL. Experiments in this paper are performed on both synthetic and real-world aerial laser scanning (ALS) point clouds. [ABSTRACT FROM AUTHOR]
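To make the four schemes named in the abstract concrete, the sketch below (not the authors' code) illustrates how k-CV, Rk-CV, MC-CV and the bootstrap partition a dataset's indices into training and validation sets. It assumes a generic dataset of hypothetical size n_samples and uses scikit-learn's KFold, RepeatedKFold and ShuffleSplit splitters (ShuffleSplit standing in for Monte Carlo CV), plus resample for bootstrap sampling with out-of-bag validation.

```python
import numpy as np
from sklearn.model_selection import KFold, RepeatedKFold, ShuffleSplit
from sklearn.utils import resample

n_samples = 1000                    # hypothetical number of point-cloud samples
indices = np.arange(n_samples)

# 1) k-fold CV (k-CV): k disjoint validation folds, each sample validated once.
kcv = KFold(n_splits=5, shuffle=True, random_state=0)

# 2) Repeated k-CV (Rk-CV): the k-fold split is repeated with fresh shuffles.
rkcv = RepeatedKFold(n_splits=5, n_repeats=3, random_state=0)

# 3) Monte Carlo CV (MC-CV): repeated random train/validation splits.
mccv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)

for name, splitter in [("k-CV", kcv), ("Rk-CV", rkcv), ("MC-CV", mccv)]:
    n_splits = 0
    for train_idx, val_idx in splitter.split(indices):
        n_splits += 1  # train the classifier on train_idx, validate on val_idx
    print(f"{name}: {n_splits} train/validation splits")

# 4) Bootstrap: draw indices with replacement; the samples never drawn
#    (out-of-bag) form the validation set.
boot_idx = resample(indices, replace=True, n_samples=n_samples, random_state=0)
oob_idx = np.setdiff1d(indices, boot_idx)
print(f"bootstrap: {len(np.unique(boot_idx))} unique training samples, "
      f"{oob_idx.size} out-of-bag validation samples")
```

In the setting the abstract describes, a DL classifier such as PointNet would be trained on each training split and scored (e.g., by MSE) on the corresponding validation split; averaging over the repeated splits is what gives the resampling-based estimates their stability.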
- Published
2022