
Dual-stage color calibration of UAV imagery using multivariate regression and deep learning.

Authors :
Abdalla, Alwaseela
Karn, Rupak
Adedeji, Oluwatola
Guo, Wenxuan
Source :
Computers & Electronics in Agriculture. Sep 2024, Vol. 224.
Publication Year :
2024

Abstract

• Advanced dual-phase approach combining MLR and deep learning for color calibration.
• Color matrix derived from four boards for creating high-quality ground truths.
• Developed im2im regression CNN focusing on learning image residuals.
• Enhanced efficiency by eliminating the need for physical color boards in the image.
• Deep learning model outperforms existing methods with 1.52% color error.

Accurate color representation in UAV-based imagery is critical in high-throughput plant phenotyping and precision agriculture for monitoring crop health and identifying nutrient deficiencies or water stress. However, achieving color accuracy is challenging due to varying lighting conditions and flight altitudes in UAV operations. To overcome these challenges, we propose an automated dual-phase approach for consistent color calibration in UAV imagery, integrating traditional color calibration methods with cutting-edge deep learning algorithms. In the first phase, we derived a color calibration matrix (M) for four color boards placed within the imaging area. This involved mapping the observed color values in the UAV images to their true color values, as measured by a colorimeter, using multivariate linear regression (MLR). The individual matrices from each board were merged into a single weighted-average matrix, which was applied to the UAV images for color correction, yielding standardized color outputs. In the second phase, we developed a deep learning model trained on the MLR-corrected images as ground truth. This model assigns varying weights to different pixels across the image scene, translating a distorted image into a color-corrected one via image-to-image regression without manual intervention. We validated this approach using 184 UAV images of various sizes, ranging from 3969 × 2589 to 15257 × 15098 pixels, captured under a wide array of lighting conditions and altitudes. The results show that our deep learning model significantly outperforms existing methods, with a marginal color error of 1.52%. This two-phase color correction method greatly improves operational efficiency by removing the necessity for physical color boards in each image and allowing for automated processing after training. [ABSTRACT FROM AUTHOR]
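The first phase described in the abstract fits a per-board calibration matrix by regressing observed color-patch values onto colorimeter references and then merges the per-board matrices into a weighted average applied to the whole image. The sketch below illustrates that idea in Python/NumPy; the 3 × 4 affine form of M, the function names, and the equal default weights are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def fit_color_matrix(observed, reference):
    """Least-squares fit of a 3x4 affine correction matrix M so that
    reference ~= M @ [observed; 1] for each color patch (RGB rows)."""
    obs_h = np.hstack([observed, np.ones((observed.shape[0], 1))])  # N x 4
    M, *_ = np.linalg.lstsq(obs_h, reference, rcond=None)           # 4 x 3
    return M.T                                                      # 3 x 4

def merge_matrices(matrices, weights):
    """Weighted average of the per-board matrices into a single matrix."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return np.tensordot(w, np.stack(matrices), axes=1)

def apply_color_matrix(image, M):
    """Apply the 3x4 matrix to an H x W x 3 image with values in [0, 1]."""
    h, w, _ = image.shape
    flat = image.reshape(-1, 3)
    flat_h = np.hstack([flat, np.ones((flat.shape[0], 1))])
    corrected = flat_h @ M.T
    return np.clip(corrected, 0.0, 1.0).reshape(h, w, 3)

# Hypothetical usage: four boards with 24 patches each (random placeholders).
boards_obs = [np.random.rand(24, 3) for _ in range(4)]
boards_ref = [np.random.rand(24, 3) for _ in range(4)]
M = merge_matrices([fit_color_matrix(o, r) for o, r in zip(boards_obs, boards_ref)],
                   weights=[1.0, 1.0, 1.0, 1.0])
corrected = apply_color_matrix(np.random.rand(512, 512, 3), M)
```

The second phase trains an image-to-image regression CNN on pairs of distorted and MLR-corrected images, with the network learning image residuals. The following PyTorch sketch shows one plausible residual-learning setup under those assumptions; the network depth, channel count, loss function, and placeholder tensors are hypothetical, not the published architecture.

```python
import torch
import torch.nn as nn

class ResidualColorNet(nn.Module):
    """Small image-to-image regression CNN that predicts a color
    residual, which is added back to the input (residual learning)."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        return torch.clamp(x + self.body(x), 0.0, 1.0)

# Training sketch: distorted UAV tiles as input, MLR-corrected tiles as target.
model = ResidualColorNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

distorted = torch.rand(4, 3, 256, 256)   # placeholder batch
target = torch.rand(4, 3, 256, 256)      # placeholder MLR-corrected ground truth
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(distorted), target)
    loss.backward()
    optimizer.step()
```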

Details

Language :
English
ISSN :
0168-1699
Volume :
224
Database :
Academic Search Index
Journal :
Computers & Electronics in Agriculture
Publication Type :
Academic Journal
Accession number :
178938736
Full Text :
https://doi.org/10.1016/j.compag.2024.109170