191 results for "pansharpening"
Search Results
2. A Spectral Enhancement Method Based on Remote-Sensing Images for High-Speed Railways
- Author
-
Dongsheng Zuo, Yingjie Li, Su Qiu, Weiqi Jin, and Hong Guo
- Subjects
remote sensing, pansharpening, high-speed rail, panchromatic images, multispectral images
- Abstract
This paper proposes a pansharpening model in order to obtain remote-sensing images with high spatial resolution and high spectral resolution. Based on a generic component substitution (CS) fusion framework, the model utilizes the difference between the high-frequency component of the panchromatic (PAN) image and the high-frequency component of the luminance (L) image to express the missing spatial detail information of the ideal high-resolution multispectral (HRMS) image. A rolling guidance filter (RGF) is used in this framework to achieve the effective extraction of high-frequency information from remote-sensing images while reducing the spectral distortion of subsequent operations. The modulation transfer function (MTF) values of the sensor are also applied to the selection of adaptive weighting coefficients to further improve the spectral fidelity of the fused images. At the same time, the choice of suitable interpolation and gain coefficients improves the generalizability of the model while reducing spectral and spatial distortions. Finally, the use of a guided filter (GF) also greatly improves the quality of the fused image. The experimental results show that the model can effectively improve the spatial resolution for foreign objects at the perimeter of high-speed railways, while also ensuring the color fidelity of foreign objects such as colored steel tiles.
- Published
- 2023
- Full Text
- View/download PDF
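The generic component-substitution (CS) framework that this model builds on can be sketched in a few lines of NumPy. This is an illustrative minimal version, not the paper's RGF/MTF pipeline: the function name, the equal-weight intensity, the histogram matching, and the unit injection gains are all assumptions made here for illustration.

```python
import numpy as np

def cs_pansharpen(ms_up, pan, weights=None, gains=None):
    """Generic component-substitution (CS) fusion.

    ms_up   : (H, W, B) multispectral image, already upsampled to PAN size.
    pan     : (H, W) panchromatic image.
    weights : per-band weights synthesizing the intensity component.
    gains   : per-band injection gains for the extracted detail.
    """
    b = ms_up.shape[2]
    if weights is None:
        weights = np.full(b, 1.0 / b)      # assumed: equal-weight intensity
    if gains is None:
        gains = np.ones(b)                 # assumed: unit injection gains
    intensity = np.tensordot(ms_up, weights, axes=([2], [0]))
    # Match PAN's mean and standard deviation to the intensity component.
    pan_m = (pan - pan.mean()) * (intensity.std() / pan.std()) + intensity.mean()
    detail = pan_m - intensity             # spatial detail missing from the MS
    return ms_up + detail[..., None] * gains
```

The abstract's refinements (rolling guidance filtering, MTF-derived weights, guided filtering) all slot into this skeleton: they change how `intensity`, `detail`, and `gains` are computed, not the injection step itself.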
3. Improved Generalized IHS Based on Total Variation for Pansharpening
- Author
-
Xuefeng Zhang, Xiaobing Dai, Xuemin Zhang, Yuchen Hu, Yingdong Kang, and Guang Jin
- Subjects
pansharpening, GIHS, total variation
- Abstract
Pansharpening refers to the fusion of a panchromatic (PAN) and a multispectral (MS) image aimed at generating a high-quality outcome over the same area. This image fusion problem has been widely studied, but it remains challenging to balance spatial and spectral fidelity in fused images. Spectral distortion is widespread in component substitution-based approaches due to the variation in the intensity distribution of spatial components. We leveraged total variation optimization to develop a novel GIHS-TV framework for pansharpening. The framework draws its high spatial fidelity from the GIHS scheme and implements it with a simpler variational expression. An improved L1-TV constraint on the new spatial–spectral information was introduced into the GIHS-TV framework, along with its fast implementation. The objective function was solved by the Iteratively Reweighted Norm (IRN) method. The experimental results on the “PAirMax” dataset clearly indicate that GIHS-TV can effectively reduce the spectral distortion in the process of component substitution. Our method achieves excellent results in both visual effects and evaluation metrics.
- Published
- 2023
- Full Text
- View/download PDF
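A minimal illustration of the L1 total-variation ingredient that GIHS-TV-style frameworks regularize with. This is a generic sketch under stated assumptions: it uses a plain subgradient step rather than the paper's IRN solver, and the function names, step size, and regularization weight are invented for illustration.

```python
import numpy as np

def tv_l1(img):
    """Anisotropic L1 total variation: sum of absolute forward differences."""
    dx = np.abs(np.diff(img, axis=1)).sum()
    dy = np.abs(np.diff(img, axis=0)).sum()
    return dx + dy

def tv_denoise_step(u, f, lam=0.1, step=0.1):
    """One subgradient step on 0.5*||u - f||^2 + lam * tv_l1(u)."""
    gx = np.sign(np.diff(u, axis=1))   # sign of horizontal differences
    gy = np.sign(np.diff(u, axis=0))   # sign of vertical differences
    sub = np.zeros(u.shape)
    sub[:, 1:] += gx                   # d|u[i,j] - u[i,j-1]| / du[i,j]
    sub[:, :-1] -= gx                  # d|u[i,j+1] - u[i,j]| / du[i,j]
    sub[1:, :] += gy
    sub[:-1, :] -= gy
    return u - step * ((u - f) + lam * sub)
```

Iterating such steps drives the estimate toward a piecewise-smooth image close to the data term, which is the effect the L1-TV constraint exploits to suppress spectral distortion.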
4. Multi-Scale and Multi-Stream Fusion Network for Pansharpening
- Author
-
Lihua Jian, Shaowu Wu, Lihui Chen, Gemine Vivone, Rakiba Rayhana, and Di Zhang
- Subjects
General Earth and Planetary Sciences, pansharpening, multi-scale, multi-stream fusion, multi-stage reconstruction loss, image enhancement, image fusion
- Abstract
Pansharpening refers to the use of a panchromatic image to improve the spatial resolution of a multi-spectral image while preserving spectral signatures. However, existing pansharpening methods are still unsatisfactory at balancing the trade-off between spatial enhancement and spectral fidelity. In this paper, a multi-scale and multi-stream fusion network (named MMFN) that leverages the multi-scale information of the source images is proposed. The proposed architecture is simple, yet effective, and can fully extract various spatial/spectral features at different levels. A multi-stage reconstruction loss was adopted to recover the pansharpened images in each multi-stream fusion block, which facilitates and stabilizes the training process. The qualitative and quantitative assessment on three real remote sensing datasets (i.e., QuickBird, Pléiades, and WorldView-2) demonstrates that the proposed approach outperforms state-of-the-art methods.
- Published
- 2023
- Full Text
- View/download PDF
5. An Optimization Procedure for Robust Regression-Based Pansharpening
- Author
-
Marco Carpentiero, Gemine Vivone, Rocco Restaino, Paolo Addesso, and Jocelyn Chanussot
- Subjects
Optimization, regression-based data fusion, Sensors, pansharpening, Maximum likelihood estimation, Estimation, Laplace equations, Image fusion, pyramidal decomposition, remote sensing, General Earth and Planetary Sciences, Electrical and Electronic Engineering
- Abstract
Model-based approaches to pansharpening still constitute a class of widely employed methods, thanks to their straightforward applicability to many problems, sparing the user time-consuming training phases. The injection scheme based on an accurate regression estimate of the relationship between the details contained in the panchromatic (PAN) image and those required for the enhancement of the multispectral (MS) image represents the most up-to-date approach to this problem, being characterized by both theoretical and practical optimality. We elaborated on this scheme by designing a procedure for estimating the key parameters required for the optimal setting of such a regression-based approach. We tested this approach on several datasets acquired by the WorldView satellites, comparing it with a benchmark of state-of-the-art pansharpening methods.
- Published
- 2022
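The regression-based injection idea described in the abstract above, estimating per-band gains from the relationship between the MS bands and a low-pass PAN component and then injecting the PAN detail, can be sketched generically. This is not the paper's optimized estimator; the least-squares slope and the function names are illustrative assumptions.

```python
import numpy as np

def regression_gains(ms_up, pan_low):
    """Per-band injection gains: least-squares slope of each upsampled
    MS band regressed on the low-pass PAN component."""
    p = pan_low.ravel() - pan_low.mean()
    var_p = (p ** 2).mean()
    gains = np.empty(ms_up.shape[2])
    for b in range(ms_up.shape[2]):
        m = ms_up[..., b].ravel() - ms_up[..., b].mean()
        gains[b] = (m * p).mean() / var_p   # cov(MS_b, P_L) / var(P_L)
    return gains

def inject(ms_up, pan, pan_low, gains):
    """Add the gained PAN detail (PAN minus its low-pass) to every band."""
    return ms_up + gains[None, None, :] * (pan - pan_low)[..., None]
```

In practice `pan_low` would come from an MTF-matched low-pass filter or pyramidal decomposition of the PAN image; here it is simply taken as given.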
6. Siamese Networks Based Deep Fusion Framework for Multi-Source Satellite Imagery
- Author
-
Hannan Adeel, Javaria Tahir, M. Mohsin Riaz, and Syed Sohaib Ali
- Subjects
remote sensing, General Computer Science, depth-of-field, General Engineering, General Materials Science, Pansharpening, Electrical engineering. Electronics. Nuclear engineering, siamese networks, image fusion, deep learning
- Abstract
A critical aim of pansharpening is to fuse coherent spatial and spectral features from panchromatic and multispectral images, respectively. This study proposes a deep siamese network-based pansharpening model as a two-stage framework in a multiscale setting. In the first stage, a siamese network learns a common feature space between panchromatic and multispectral bands. The second stage fuses the output feature maps of the siamese network. The parameters of these two stages are shared across scales in order to add spatial information consistently. Spectral information is preserved by adding appropriate skip connections from the input multispectral image. Sharing network parameters across levels in the pyramidal reconstruction of the pansharpened image better preserves spatial and spectral details simultaneously. Experiments show that the proposed multi-scale deep siamese network, which captures inter-band similarity among different sensor data, outperforms several recent pansharpening methods.
- Published
- 2022
7. Hyperspectral Pansharpening With Adaptive Feature Modulation-Based Detail Injection Network
- Author
-
Yunsong Li, Yuxuan Zheng, Jiaojiao Li, Rui Song, and Jocelyn Chanussot
- Subjects
Modulation, Spatial resolution, Image resolution, Feature extraction, General Earth and Planetary Sciences, Pansharpening, Electrical and Electronic Engineering, Data mining, Convolution
- Abstract
Recently, deep learning-based methodologies have attained unprecedented performance in hyperspectral (HS) pansharpening, which aims to improve the spatial quality of HS images (HSIs) by making use of details extracted from the high-resolution panchromatic (HR-PAN) image. However, it remains challenging to incorporate the details into the pansharpened image effectively, while alleviating the spectral distortion simultaneously. To tackle this problem, in this article, we propose an adaptive feature modulation-based detail injection network (AFM-DIN) for HS pansharpening, which mainly consists of four phases: high-frequency details generation of the HR-PAN image, multiscale feature extraction of the upsampled HSI, AFM-based detail injection, and reconstruction of the HR-HSI. First, a novel octave convolution unit is employed to decompose the HR-PAN image into high and low frequencies, and then merge the high-frequency features together to generate the comprehensive PAN-details. Second, the spatial and spectral separable 3-D convolution units with multiple kernel sizes are designed to extract multiscale features of the upsampled HSI in a computationally efficient manner. Subsequently, by taking the critical PAN-details as prior, the proposed AFM module is able to not only incorporate the detail information effectively, but also adjust the injected details adaptively to ensure the spectral fidelity. Finally, the anticipated HR-HSI is obtained through adding the upsampled HSI to the predicted HSI-details reconstructed from informative modulated features. Extensive comparison experiments with several state-of-the-art methods conducted on simulated and real HS datasets demonstrate that our proposed AFM-DIN can achieve superior pansharpening accuracy in both spatial and spectral aspects.
- Published
- 2022
8. CNN-Based Hyperspectral Pansharpening With Arbitrary Resolution
- Author
-
Lin He, Jiawei Zhu, Jun Li, Antonio Plaza, Jocelyn Chanussot, and Zhuliang Yu
- Subjects
Spatial resolution, Hyperspectral imaging, Task analysis, Image reconstruction, Training, General Earth and Planetary Sciences, Convolutional neural networks, Pansharpening, Electrical and Electronic Engineering
- Abstract
Traditional hyperspectral (HS) pansharpening aims at fusing an HS image with its panchromatic (PAN) counterpart, to bring the spatial resolution of the HS image to that of the PAN image. However, in many practical applications, arbitrary resolution HS (ARHS) pansharpening is required, where the HS and PAN images need to be integrated to generate a pansharpened HS image with arbitrary resolution (usually higher than that of the PAN image). Such an innovative task brings forth new challenges for the pansharpening technique, mainly including how to reconstruct HS images beyond the training scale and how to guarantee spectral fidelity at any spatial resolution. To tackle these challenges, we present a novel convolutional neural network (CNN)-based method for ARHS pansharpening called ARHS-CNN. It is based on a two-step relay optimization process, which is associated with a multilevel enhancement subnetwork and a rescaling subnetwork. With careful design, our ARHS-CNN is able to pansharpen HS images to any spatial resolution using just a single CNN model trained on a limited number of scales, while maintaining spectral fidelity at those resolutions, a clear advantage over traditional pansharpening methods. Experimental results obtained on several datasets verify the excellent performance of our ARHS-CNN method.
- Published
- 2022
9. ArbRPN: A Bidirectional Recurrent Pansharpening Network for Multispectral Images With Arbitrary Numbers of Bands
- Author
-
Xiaomin Yang, Zhibing Lai, Gemine Vivone, Gwanggil Jeon, Jocelyn Chanussot, and Lihui Chen
- Subjects
Computer science, Multispectral image, pansharpening, Pattern recognition, multispectral (MS) images, image fusion, remote sensing, Deep learning (DL), recurrent neural networks (RNNs), General Earth and Planetary Sciences, Artificial intelligence, Electrical and Electronic Engineering
- Abstract
Although the performance of pansharpening has been significantly improved by advanced deep-learning (DL) technologies in recent years, most DL-based methods fail to process multispectral (MS) images with arbitrary numbers of bands by a single model. Consequently, it is inevitable to train separate models for MS images with different numbers of bands, which is time- and storage-consuming as well as inefficient in practice. To tackle the above problem, we propose a bidirectional recurrent pansharpening network (named ArbRPN) for MS images with arbitrary numbers of bands. Our ArbRPN can dynamically reconstruct high-resolution (HR) MS images with different numbers of bands by adaptively matching the number of recurrences to the number of bands of the low-resolution (LR) MS images. Leveraging the ability of the ArbRPN to process MS images with any number of bands, one can even customize the bands to be pansharpened. Moreover, to achieve superior performance, spectral discrepancy and dependence are considered in the ArbRPN. Details from the panchromatic (PAN) image are adaptively injected into the fused product according to the captured spectral dependence. Furthermore, training strategies of existing DL-based pansharpening methods can only group MS images with a constant number of bands into mini-batches. Therefore, we present a mask-based training method (called mask-training) to solve this problem. Benefiting from the mask-training, our ArbRPN can achieve superior performance and robustness during pansharpening. Extensive experiments show the superior performance of our ArbRPN with respect to the state-of-the-art (SOTA) methods applied to MS images with different numbers of bands. The code of our ArbRPN is available on https://github.com/Lihui-Chen/ArbRPN.git.
- Published
- 2022
10. Fast Full-Resolution Target-Adaptive CNN-Based Pansharpening Framework
- Author
-
Matteo Ciotola and Giuseppe Scarpa
- Subjects
data fusion, multiresolution analysis, super-resolution, pansharpening, General Earth and Planetary Sciences
- Abstract
In the last few years, there has been a renewed interest in data fusion techniques, and, in particular, in pansharpening, due to a paradigm shift from model-based to data-driven approaches, supported by the recent advances in deep learning. Although a plethora of convolutional neural networks (CNNs) for pansharpening have been devised, some fundamental issues remain open. Among these, cross-scale and cross-dataset generalization capabilities are probably the most urgent, since most of the current networks are trained at a different scale (reduced resolution), and, in general, they are well-fitted on some datasets but fail on others. A recent attempt to address both these issues leverages a target-adaptive inference scheme operating with a suitable full-resolution loss. On the downside, such an approach incurs additional computational overhead due to the adaptation phase. In this work, we propose a variant of this method with an effective target-adaptation scheme that reduces inference time by a factor of ten, on average, without accuracy loss. A wide set of experiments carried out on three different datasets, GeoEye-1, WorldView-2 and WorldView-3, proves the computational gain obtained while keeping top accuracy scores compared to state-of-the-art methods, both model-based and deep-learning ones. The generality of the proposed solution has also been validated by applying the new adaptation framework to different CNN models.
- Published
- 2023
11. Panchromatic and Hyperspectral Image Fusion: Outcome of the 2022 WHISPERS Hyperspectral Pansharpening Challenge
- Author
-
Gemine Vivone, Andrea Garzelli, Yang Xu, Wenzhi Liao, and Jocelyn Chanussot
- Subjects
Remote Sensing, Atmospheric Science, Optical Imaging, PRISMA Images, Pansharpening, Hyperspectral Imaging, Computers in Earth Sciences, Resolution Enhancement, Image Fusion
- Published
- 2023
12. MPFINet: A Multilevel Parallel Feature Injection Network for Panchromatic and Multispectral Image Fusion
- Author
-
Yuting Feng, Xin Jin, Qian Jiang, Quanli Wang, Lin Liu, and Shaowen Yao
- Subjects
pansharpening, image fusion, remote sensing, deep learning, self-attention mechanism, General Earth and Planetary Sciences
- Abstract
The fusion of a high-spatial-resolution panchromatic (PAN) image and a corresponding low-resolution multispectral (MS) image can yield a high-resolution multispectral (HRMS) image, which is also known as pansharpening. Most previous methods based on convolutional neural networks (CNNs) have achieved remarkable results. However, information at different scales has not been fully mined and utilized, and the results still exhibit spectral and spatial distortion. In this work, we propose a multilevel parallel feature injection network that contains three scale levels and two parallel branches. In the feature extraction branch, a multi-scale perception dynamic convolution dense block is proposed to adaptively extract the spatial and spectral information. Then, the sufficient multilevel features are injected into the image reconstruction branch, and an attention fusion module based on the spectral dimension is designed in order to fuse shallow contextual features and deep semantic features. In the image reconstruction branch, cascaded transformer blocks are employed to capture the similarities among the spectral bands of the MS image. Extensive experiments are conducted on the QuickBird and WorldView-3 datasets to demonstrate that MPFINet achieves significant improvement over several state-of-the-art methods on both spatial and spectral quality assessments.
- Published
- 2022
- Full Text
- View/download PDF
13. ENHANCING UAV COASTAL MAPPING USING INFRARED PANSHARPENING
- Author
-
D. James, A. Collin, A. Mury, and M. Letard
- Subjects
Technology, Multispectral, Computer science, UAV, Multispectral image, Pansharpening, Convolutional neural network, Applied optics. Photonics, Image resolution, Ecosystem, Remote sensing, Spectral bands, Classification, Engineering (General). Civil engineering (General), Support vector machine, Coastal, Principal component analysis, RGB color model, Scale (map)
- Abstract
Ecosystems must now cope with climate change effects such as rising sea levels. These major changes have a direct impact on the coastal fringe. In recent years, however, coastal ecosystems such as saltmarshes have proven their adaptive capacity. Unmanned Aerial Vehicles (UAVs), an inexpensive and easily deployable means of monitoring these geomorphological and ecological systems, have been perfected over the years, making it possible to achieve high or even very high (VH) spectral and spatial resolution. This facilitates the detection of changes at VH temporal and spatial resolution, such as coastline evolution or the seasonal monitoring of plant communities. The red-green-blue (RGB) camera is the basic equipment of low-cost UAVs, and many studies have demonstrated the interest of infrared sensors for vegetation or water detection. In this original study, a pansharpening method was developed to generate red-edge (RE) and near-infrared (NIR) channels based on the VH resolution of the RGB camera. Of the three pansharpening algorithms tested, Gram-Schmidt showed the highest correlation (0.61 and 0.63 for the RE and NIR channels, respectively), followed by nearest neighbor diffusion and, finally, principal component spectral pansharpening. The maximum likelihood (ML), support vector machine (SVM) and convolutional neural network classifiers were used to discriminate the main objects of the study area. The classification results revealed that, at the classifier scale, ML outperforms the others with an overall accuracy (OA) of 80.75%. At the spectral band scale, the RE channel obtains the best performance, with 80.04% OA using ML and 78.34% OA using SVM.
- Published
- 2021
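Gram-Schmidt pansharpening, the best-performing algorithm in the study above, has a compact detail-injection form that can be sketched as follows. This is a generic textbook-style version with a band-mean simulated PAN, not the exact implementation used in the study; the function name and the statistics matching are assumptions made here.

```python
import numpy as np

def gram_schmidt_pansharpen(ms_up, pan):
    """Gram-Schmidt-style pansharpening in detail-injection form.

    The simulated low-resolution PAN is the band mean; per-band gains are
    cov(band, simulated PAN) / var(simulated PAN), which is the injection
    form of the classical GS transform.
    """
    sim = ms_up.mean(axis=2)                  # simulated low-res PAN
    # Match the real PAN's statistics to the simulated one before injection.
    pan_m = (pan - pan.mean()) * (sim.std() / pan.std()) + sim.mean()
    s = sim - sim.mean()
    var_s = (s ** 2).mean()
    out = np.empty_like(ms_up)
    for b in range(ms_up.shape[2]):
        band = ms_up[..., b]
        g = ((band - band.mean()) * s).mean() / var_s
        out[..., b] = band + g * (pan_m - sim)
    return out
```

Here the RGB bands would play the role of `ms_up` and the synthesized VH-resolution channel the role of `pan`; when the real PAN matches the simulated one, no detail is injected and the MS image passes through unchanged.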
14. Deep Pansharpening via 3D Spectral Super-Resolution Network and Discrepancy-Based Gradient Transfer
- Author
-
Haonan Su, Haiyan Jin, and Ce Sun
- Subjects
spectral super-resolution, pansharpening, discrepancy, 3D convolutional neural network, hyperspectral images (HS), multispectral images (MS), gradient transfer, General Earth and Planetary Sciences
- Abstract
High-resolution (HR) multispectral (MS) images contain sharper detail and structure compared to the ground truth high-resolution hyperspectral (HS) images. In this paper, we propose a novel supervised learning method, which considers pansharpening as the spectral super-resolution of high-resolution multispectral images and generates high-resolution hyperspectral images. The proposed method learns the spectral mapping between high-resolution multispectral images and the ground truth high-resolution hyperspectral images. To consider the spectral correlation between bands, we build a three-dimensional (3D) convolutional neural network (CNN). The network consists of three parts using an encoder–decoder framework: spatial/spectral feature extraction from high-resolution multispectral images/low-resolution (LR) hyperspectral images, feature transform, and image reconstruction to generate the results. In the image reconstruction network, we design the spatial–spectral fusion (SSF) blocks to reuse the extracted spatial and spectral features in the reconstructed feature layer. Then, we develop the discrepancy-based deep hybrid gradient (DDHG) losses with the spatial–spectral gradient (SSG) loss and deep gradient transfer (DGT) loss. The spatial–spectral gradient loss and deep gradient transfer loss are developed to preserve the spatial and spectral gradients from the ground truth high-resolution hyperspectral images and high-resolution multispectral images. To overcome the spectral and spatial discrepancy between the two images, we design a spectral downsampling (SD) network and a gradient consistency estimation (GCE) network for the hybrid gradient losses. Experiments show that the proposed method outperforms state-of-the-art methods in both subjective and objective evaluations in terms of the structure and spectral preservation of high-resolution hyperspectral images.
- Published
- 2022
- Full Text
- View/download PDF
15. Local-Global Based High-Resolution Spatial-Spectral Representation Network for Pansharpening
- Author
-
Wei Huang, Ming Ju, Zhuobing Zhao, Qinggang Wu, and Erlin Tian
- Subjects
pansharpening, transformer, texture, high-resolution, depthwise separable convolution, contextual aggregation, General Earth and Planetary Sciences
- Abstract
Due to the inability of convolutional neural networks to effectively obtain long-range information, a transformer was recently introduced into the field of pansharpening to obtain global dependencies. However, a transformer does not pay enough attention to the information of channel dimensions. To solve this problem, a local-global-based high-resolution spatial-spectral representation network (LG-HSSRN) is proposed to fully fuse local and global spatial-spectral information at different scales. In this paper, a multi-scale feature fusion (MSFF) architecture is designed to obtain the scale information of remote sensing images. Meanwhile, in order to learn spatial texture information and spectral information effectively, a local-global feature extraction (LGFE) module is proposed to capture the local and global dependencies in the source images from a spatial-spectral perspective. In addition, a multi-scale contextual aggregation (MSCA) module is proposed to weave hierarchical information with high representational power. The results of three satellite datasets show that the proposed method exhibits superior performance in terms of both spatial and spectral preservation compared to other methods.
- Published
- 2022
- Full Text
- View/download PDF
16. Variable Subpixel Convolution Based Arbitrary-Resolution Hyperspectral Pansharpening
- Author
-
Lin He, Jinhua Xie, Jun Li, Antonio Plaza, Jocelyn Chanussot, Jiawei Zhu, South China University of Technology [Guangzhou] (SCUT), China University of Geosciences [Wuhan] (CUG), Universidad de Extremadura - University of Extremadura (UEX), Grenoble Images Parole Signal Automatique (GIPSA-lab, Université Grenoble Alpes, CNRS, Grenoble INP), Apprentissage de modèles à partir de données massives (Thoth), Inria Grenoble - Rhône-Alpes, Laboratoire Jean Kuntzmann (LJK), and ANR-19-P3IA-0003,MIAI,MIAI @ Grenoble Alpes(2019)
- Subjects
Spatial resolution ,Standards ,Task analysis ,General Earth and Planetary Sciences ,Training ,Pansharpening ,Electrical and Electronic Engineering ,Optimized production technology ,[SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing ,Convolution - Abstract
International audience; Standard hyperspectral (HS) pansharpening relies on fusion to enhance low-resolution HS (LRHS) images to the resolution of their matching panchromatic (PAN) images, and its practical implementation normally stipulates that the model's scale be invariant across the training phase and the pansharpening phase. By contrast, arbitrary-resolution HS (ARHS) pansharpening seeks to pansharpen LRHS images to any user-customized resolution. For such a new HS pansharpening task, it is not feasible to train and store convolutional neural network (CNN) models for all possible candidate scales, which implies that the single model acquired from the training phase should be capable of being generalized to yield HS images at any resolution in the pansharpening phase. To address this challenge, a novel variable subpixel convolution (VSPC)-based CNN (VSPC-CNN) method following our arbitrary upsampling CNN (AU-CNN) framework is developed for ARHS pansharpening. The VSPC-CNN method comprises two stages. The first stage improves the spatial resolution of the input HS image to that of the PAN image through a prepansharpening module; a VSPC-encapsulated arbitrary scale attention upsampling (ASAU) module is then cascaded for arbitrary resolution adjustment. After training with given scales, the model can be generalized to pansharpen HS images to arbitrary scales, provided the spatial patterns remain invariant across the training and pansharpening phases. Experimental results from several specific VSPC-CNNs on both simulated and real HS datasets show the superiority of the proposed method.
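The subpixel convolution underlying VSPC builds on the standard channel-to-space "pixel shuffle". The following numpy sketch illustrates that fixed-scale shuffle only; the paper's contribution, making the scale variable, is not reproduced here, and the names are ours:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) tensor into (C, H*r, W*r).

    This channel-to-space mapping is the core of subpixel
    convolution: the network predicts r*r feature maps per output
    channel, and the shuffle interleaves them into an upsampled grid.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

feat = np.arange(2 * 4 * 3 * 3, dtype=float).reshape(2 * 4, 3, 3)
up = pixel_shuffle(feat, 2)
print(up.shape)                      # (2, 6, 6)
```

Because the rearrangement is purely an indexing operation, upsampling happens in the cheap low-resolution feature space, which is why the abstract's networks can defer resolution enhancement to the final layer.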
- Published
- 2022
17. PGMAN: An Unsupervised Generative Multiadversarial Network for Pansharpening
- Author
-
Yunhong Wang, Qingjie Liu, and Huanyu Zhou
- Subjects
Atmospheric Science ,Discriminator ,Computer science ,business.industry ,QC801-809 ,Deep learning ,Multispectral image ,Generative adversarial network (GAN) ,Geophysics. Cosmic physics ,Pattern recognition ,pansharpening ,unsupervised learning ,image fusion ,Panchromatic film ,Ocean engineering ,Code (cryptography) ,Preprocessor ,Artificial intelligence ,Computers in Earth Sciences ,business ,Spatial analysis ,Image resolution ,TC1501-1800 - Abstract
Pansharpening aims at fusing a low-resolution multispectral (MS) image and a high-resolution (HR) panchromatic (PAN) image acquired by a satellite to generate an HR MS image. Many deep-learning-based methods have been developed in the past few years. However, since there are no real HR MS images available as references for learning, almost all of the existing methods downsample the MS and PAN images and regard the original MS images as targets to form a supervised setting for training. These methods may perform well on the downscaled images; however, they generalize poorly to the full-resolution images. To overcome this problem, we design an unsupervised framework that is able to learn directly from the full-resolution images without any preprocessing. The model is built on a novel generative multiadversarial network. We use a two-stream generator to extract the modality-specific features from the PAN and MS images, respectively, and develop a dual discriminator to preserve the spectral and spatial information of the inputs when performing fusion. Furthermore, a novel loss function is introduced to facilitate training under the unsupervised setting. Experiments and comparisons with other state-of-the-art methods on GaoFen-2, QuickBird, and WorldView-3 images demonstrate that the proposed method obtains much better fusion results on the full-resolution images. Code is available online at https://github.com/zhysora/PGMAN.
- Published
- 2021
18. Pansharpening PRISMA Data for Marine Plastic Litter Detection Using Plastic Indexes
- Author
-
Antonello Aiello, Vassilia Karathanassi, Konstantinos Topouzelis, Viktoria Kristollari, Enrico Barbone, Giulio Ceriola, Maria Kremezi, Nicolo Taggio, Paolo Corradi, and Pol Kolokoussis
- Subjects
010504 meteorology & atmospheric sciences ,General Computer Science ,Computer science ,hyperspectral imaging ,0211 other engineering and technologies ,pansharpening ,02 engineering and technology ,01 natural sciences ,plastic litter detection ,General Materials Science ,Spectral resolution ,Image resolution ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,Remote sensing ,Pixel ,General Engineering ,Hyperspectral imaging ,PRISMA satellite data ,Panchromatic film ,marine pollution ,TK1-9971 ,indexes ,Pixelation ,Litter ,Satellite ,Electrical engineering. Electronics. Nuclear engineering - Abstract
Hyperspectral PRISMA images are new and have not yet been evaluated for their ability to detect marine plastic litter. Hyperspectral PRISMA images have a fine spectral resolution; however, their spatial resolution is not high enough to enable the discrimination of small plastic objects in the ocean. Pansharpening with the panchromatic data enhances their spatial resolution and makes such detection a realistic, if technologically challenging, prospect. This study exploits, for the first time, the potential of satellite hyperspectral data for detecting small-sized marine plastic litter. Controlled experiments with plastic targets of various sizes constructed from several materials have been conducted. The required pre-processing steps have been defined, and 13 pansharpening methods have been applied and evaluated for their ability to spectrally discriminate plastics from water. Among them, PCA-based substitution efficiently separates plastic spectra from water without producing duplicate edges or pixelation. Plastic targets with a size equivalent to 8% of the original hyperspectral image pixel coverage are easily detected. The same targets can also be observed in the panchromatic image; however, they cannot be detected from the panchromatic information alone, as they are confused with other features. Exploiting spectra derived from the pan-sharpened hyperspectral images, an index-combining methodology has been developed that enables the detection of plastic objects. Although the spectra of plastic materials present similarities with water spectra, some spectral characteristics can be exploited to produce marine plastic litter indexes. Based on these indexes, the index-combining methodology has successfully detected the plastic targets and differentiated them from other materials.
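PCA-based component substitution, which the study found most effective, can be sketched in a few lines of numpy. This is a generic textbook version with hypothetical names, not the exact implementation evaluated in the paper:

```python
import numpy as np

def pca_pansharpen(ms_up, pan):
    """PCA component-substitution sketch.

    ms_up : (bands, H, W) multispectral image upsampled to PAN size
    pan   : (H, W) panchromatic image
    The first principal component (which carries most of the spatial
    structure) is replaced by the histogram-matched PAN band and the
    transform is inverted.
    """
    b, h, w = ms_up.shape
    X = ms_up.reshape(b, -1)
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    # Eigen-decomposition of the band covariance matrix
    cov = Xc @ Xc.T / Xc.shape[1]
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, np.argsort(vals)[::-1]]
    pcs = vecs.T @ Xc                   # principal components
    p = pan.reshape(-1)
    # Match PAN statistics to the first component before substitution
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[0].std() + pcs[0].mean()
    pcs[0] = p
    fused = vecs @ pcs + mean
    return fused.reshape(b, h, w)
```

Because only the first component is replaced and the mean is restored, the per-band averages of the fused product match the input MS image, which limits global spectral distortion.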
- Published
- 2021
19. Pansharpening via Subpixel Convolutional Residual Network
- Author
-
Chunyu Li, Yuhui Zheng, and Byeungwoo Jeon
- Subjects
Atmospheric Science ,Computer science ,Convolutional neural network (CNN) ,Geophysics. Cosmic physics ,Multispectral image ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,pansharpening ,Residual ,Convolutional neural network ,remote sensing ,Nearest-neighbor interpolation ,Computers in Earth Sciences ,TC1501-1800 ,data fusion ,QC801-809 ,business.industry ,subpixel ,Pattern recognition ,Filter (signal processing) ,Subpixel rendering ,guided filter ,Ocean engineering ,Feature (computer vision) ,Artificial intelligence ,business - Abstract
In this article, we propose a new pansharpening architecture called subpixel convolutional residual network to obtain high-resolution multispectral (MS) images. Different from previous works, we extract features from MS images in a low-resolution space and pay more attention to the balance of spectral and spatial information. Our architecture consists of two branches: the feature extraction branch and the residual branch. The former adopts a four-layer convolutional network to extract features, and then upsamples the feature maps using a subpixel convolution layer. For the latter, we combine the nearest neighbor interpolation and guided filter to yield a preliminary image with fundamental spectral and spatial information. With the outputs of the two branches, we can merge them and yield a pansharpened image. The proposed method was compared with several representative methods. The experimental results demonstrate that our method achieves high fusion accuracy while maintaining a good balance between the spectral and the spatial resolution.
- Published
- 2021
20. Attention_FPNet: Two-Branch Remote Sensing Image Pansharpening Network Based on Attention Feature Fusion
- Author
-
Yaling Wan, Long Chen, Jun Liu, Hui Liu, Jing Qian, Xiwu Zhong, Liang Gao, and Yurong Qian
- Subjects
convolutional neural network (CNN) ,Atmospheric Science ,Channel (digital image) ,Attention feature fusion (AFF) ,QC801-809 ,business.industry ,Computer science ,Geophysics. Cosmic physics ,Concatenation ,Multispectral image ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,pansharpening ,Filter (signal processing) ,image fusion ,Convolutional neural network ,Panchromatic film ,Ocean engineering ,remote sensing ,Feature (computer vision) ,Computer vision ,Artificial intelligence ,Computers in Earth Sciences ,business ,TC1501-1800 ,Spatial analysis - Abstract
Inspired by the impressive achievements of convolutional neural networks (CNNs) in various computer vision tasks and the effective role of attention mechanisms, this paper proposes a two-branch fusion network based on attention feature fusion (AFF), called Attention_FPNet, to solve the pansharpening problem. We reconstruct the spatial information of an image in the high-pass filter domain and fully consider the spatial information in the multispectral (MS) and panchromatic (PAN) images. At the same time, the input PAN image and the upsampled MS image are directly transmitted to the reconstructed image through a long skip connection. The spectral information of the PAN and MS images is considered to improve the spectral resolution of the fused image; this also compensates for the loss of spatial information that network deepening may cause. Moreover, an AFF method is used to replace the simple channel concatenation commonly used in pansharpening, which fully considers the relationship between different feature maps and improves the fusion quality. Experiments on image datasets acquired by the Pleiades, SPOT-6 and Gaofen-2 satellites show that this method can effectively fuse PAN and MS images, generating a fused image that outperforms those of existing methods.
- Published
- 2021
21. Pansharpening Via Neighbor Embedding of Spatial Details
- Author
-
Rongrong Fei, Jiangshe Zhang, Junmin Liu, Chunxia Zhang, and Changsheng Zhou
- Subjects
Atmospheric Science ,QC801-809 ,Computer science ,Geophysics. Cosmic physics ,neighbor embedding (NE) ,0211 other engineering and technologies ,pansharpening ,sparse representation (SR) ,02 engineering and technology ,Sparse approximation ,Iterative reconstruction ,Filter (signal processing) ,Ocean engineering ,0202 electrical engineering, electronic engineering, information engineering ,Image fusion ,Embedding ,020201 artificial intelligence & image processing ,Enhanced Data Rates for GSM Evolution ,Computers in Earth Sciences ,TC1501-1800 ,Image resolution ,Algorithm ,021101 geological & geomatics engineering - Abstract
The spatial detail injection model has been considered a general framework in the pansharpening literature, and recently there have been significant advances in this framework based on sparse representation (SR) of spatial details. However, the SR-based methods carry a greater computational burden in estimating the sparse vectors and have limited ability to preserve detail edges. In this article, we introduce neighbor embedding (NE) in place of the SR-based model, together with an edge-preserving filter, into the spatial detail injection framework to address these two drawbacks. Exploiting the strengths of NE, we propose the detail injection via NE (DINE) algorithm for pansharpening, and DINE+, an improved variant of DINE that uses the edge-preserving filter to enhance the spatial details. Experiments carried out on three datasets captured by different satellite sensors, with comparisons against current state-of-the-art methods, validate the effectiveness of the proposed methods.
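The neighbor-embedding step can be sketched with the classic LLE-style reconstruction weights: express a low-resolution detail patch as an affine combination of its nearest dictionary patches, then apply the same weights to the high-resolution counterparts. The numpy function below is a generic illustration under our own naming, not the authors' DINE code:

```python
import numpy as np

def ne_reconstruct(query, lr_dict, hr_dict, k=4):
    """Neighbor-embedding detail estimation (LLE-style sketch).

    query   : (d,) low-resolution detail patch
    lr_dict : (N, d) dictionary of LR patches
    hr_dict : (N, D) corresponding HR detail patches
    The query is written as an affine combination of its k nearest
    LR neighbours; the same weights then combine the HR counterparts.
    """
    dist = np.linalg.norm(lr_dict - query, axis=1)
    idx = np.argsort(dist)[:k]
    Z = lr_dict[idx] - query            # neighbours centred on the query
    G = Z @ Z.T + 1e-6 * np.eye(k)      # local Gram matrix, regularised
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()                         # affine (sum-to-one) weights
    return w @ hr_dict[idx]
```

Unlike SR, no sparse coding problem is solved per patch, only a small k-by-k linear system, which is the computational advantage the abstract refers to.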
- Published
- 2021
22. A Benchmarking Protocol for Pansharpening: Dataset, Preprocessing, and Quality Assessment
- Author
-
Gemine Vivone, Fabio Pacifici, Mauro Dalla Mura, and Andrea Garzelli
- Subjects
Atmospheric Science ,quality assessment ,Computer science ,Geophysics. Cosmic physics ,Multispectral image ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,pansharpening ,computer.software_genre ,image fusion ,reproducible science ,Upsampling ,remote sensing ,open source ,Benchmarking, reproducible science, open source, pansharpening, quality assessment, very high resolution optical images, image fusion, remote sensing ,Preprocessor ,Computers in Earth Sciences ,Reference implementation ,TC1501-1800 ,Protocol (science) ,Image fusion ,QC801-809 ,Benchmarking ,Panchromatic film ,Ocean engineering ,Data mining ,very high resolution optical images ,computer - Abstract
Comparative evaluation is a requirement for reproducible science and objective assessment of new algorithms. Reproducible research in the field of pansharpening of very high resolution images is a difficult task due to the lack of openly available reference datasets and protocols. The contribution of this article is threefold, and it defines a benchmarking framework to evaluate pansharpening algorithms. First, it establishes a reference dataset, named PAirMax, composed of 14 panchromatic and multispectral image pairs collected over heterogeneous landscapes by different satellites. Second, it standardizes various image preprocessing steps, such as filtering, upsampling, and band coregistration, by providing a reference implementation. Third, it details the quality assessment protocols for reproducible algorithm evaluation.
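A reduced-resolution assessment pair in the spirit of Wald's protocol can be sketched as follows. The box-average filter here is a crude, hypothetical stand-in for the MTF-matched filtering that the benchmark's reference implementation standardizes:

```python
import numpy as np

def box_downsample(img, r):
    """Average-pool an (H, W) image by an integer factor r
    (a crude stand-in for MTF-matched low-pass filtering)."""
    h, w = img.shape
    img = img[:h - h % r, :w - w % r]
    return img.reshape(img.shape[0] // r, r, img.shape[1] // r, r).mean(axis=(1, 3))

def walds_protocol_pair(ms, pan, r=4):
    """Build a reduced-resolution training/assessment pair.

    Both inputs are degraded by the resolution ratio r, and the
    original MS bands serve as the full-reference ground truth,
    which is the standard reduced-resolution assessment setting.
    """
    ms_lr = np.stack([box_downsample(band, r) for band in ms])
    pan_lr = box_downsample(pan, r)
    reference = ms
    return ms_lr, pan_lr, reference
```

A pansharpening algorithm is then run on `(ms_lr, pan_lr)` and its output compared against `reference` with full-reference indexes; full-resolution assessment, by contrast, must rely on no-reference indexes.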
- Published
- 2021
23. Fusion of hyperspectral and panchromatic data in the reflective domain
- Author
-
Constans, Yohann, Institut Supérieur de l'Aéronautique et de l'Espace, Briottet, Xavier, and Deville, Yannick
- Subjects
Image fusion ,Reflective domain ,Hyperspectral ,Mixed pixels ,Pansharpening ,Panchromatic - Abstract
As satellite sensors cannot acquire Earth observation images with both high spatial and high spectral resolution, a solution is to combine a high-spatial-resolution panchromatic (PAN) image with a high-spectral-resolution hyperspectral (HS) image to generate a new image with high resolution in both dimensions. This fusion process, called HS pansharpening, suffers from some limitations, including mixed-pixel handling. Such pixels are particularly abundant in urban environments. The objective of this thesis is to develop and validate a new HS pansharpening method in the [0.4 - 2.5 µm] spectral domain that optimises the reconstruction of mixed pixels. To this end, one method from the literature, called Spatially Organized Spectral Unmixing (SOSU), has been chosen as a starting point. It is based on mixed-pixel preprocessing steps, including spectral unmixing and spatial reorganisation, and a fusion step called Gain. To evaluate fusion methods, simulated datasets presenting several spatial complexity levels and acquired by different instruments have been constructed from existing airborne data. In addition, a robust performance assessment protocol has been proposed. It is based on Wald's protocol and quality criteria applied at different spatial scales and to different spectral domains, and it is supplemented by additional ground truth (land cover maps obtained by supervised classification). Improvements have been made to gradually adapt SOSU to scenes of high spatial complexity. Firstly, a new spatial reorganisation approach based on combinatorial analysis has been proposed for processing agricultural to peri-urban environments. Additional improvements have been made for processing urban environments, notably by modelling the combinatorial analysis as an optimisation problem, which led to the Combinatorial OptimisatioN for 2D ORganisation (CONDOR) method. This method has been evaluated and compared with reference methods in terms of performance. Analysis of the results has revealed visual and numerical enhancements of fusion quality and has shown that the most important limitation stems from the non-representation of the [1.0 - 2.5 µm] SWIR domain in the PAN input image. A new instrument concept was introduced to overcome this limitation, adding a second PAN channel in the [2.0 - 2.5 µm] SWIR II domain. The Gain-2P and CONDOR-2P methods, extensions of the Gain and CONDOR methods taking this second PAN channel into account, have been developed. Analysis of the results from these two extended methods revealed their significant contribution (up to 60 % and 45 % improvement over their initial versions on peri-urban and urban data, respectively), as well as the enhanced quality of the image fused by CONDOR-2P compared with Gain-2P (up to a 9 % improvement). Finally, a sensitivity study has been performed to evaluate the robustness of the proposed methods with respect to instrumental defects and characteristics (spatial resolution ratio, deregistration, noise and modulation transfer function), using configurations representative of existing satellite instruments. Despite the sensitivity of all methods to the different parameters, the analyses have shown that CONDOR-2P almost systematically provides the best reconstruction quality and is particularly robust to increases in the spatial resolution ratio (10 % improvement over Gain-2P for a ratio of 8 in a peri-urban environment).
- Published
- 2022
24. Multiband Remote Sensing Image Pansharpening Based on Dual-Injection Model
- Author
-
Wei Tu, Lei Wu, Shuying Huang, Hangyuan Lu, Yong Yang, and Weiguo Wan
- Subjects
Atmospheric Science ,Fusion image ,injection gains ,Computer science ,QC801-809 ,Multispectral image ,Geophysics. Cosmic physics ,0211 other engineering and technologies ,pansharpening ,sparse representation (SR) ,02 engineering and technology ,Dual injection ,Image (mathematics) ,Panchromatic film ,Dual-injection model ,Adaptive integration ,Upsampling ,Ocean engineering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computers in Earth Sciences ,TC1501-1800 ,021101 geological & geomatics engineering ,Remote sensing - Abstract
Pansharpening exploits the high-frequency component (HFC) of panchromatic (PAN) images to restore the spatial resolution of the corresponding multispectral (MS) image. In this article, a dual-injection model-based multiband remote sensing image pansharpening method is presented that focuses on how to correctly use the HFC to obtain a high-spatial-resolution MS image. The model is based on a two-step HFC injection algorithm with two different injection gains. In the first step, an HFC is reconstructed with sparse theory, and an injection gain based on the relationship between the PAN and MS images is developed. Applying this injection gain and the reconstructed HFC to an upsampled MS image then produces an improved LRMS (ILRMS) image. In the second step, another injection gain, based on the differences and similarities between the PAN and MS images, is designed. With the help of this injection gain, the fused image is obtained via the adaptive integration of the ILRMS image and the HFC from the PAN image. Experiments confirm that the proposed method is more effective than several widely used pansharpening methods.
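The generic two-ingredient recipe behind such methods, a high-frequency detail plus an injection gain, can be sketched in numpy. The gains below (a unit additive gain and the band-ratio gain of high-pass modulation) are textbook choices, not the paper's dual gains:

```python
import numpy as np

def inject_details(ms_up, pan, pan_low, mode="hpm"):
    """Generic detail-injection sketch.

    ms_up   : (bands, H, W) upsampled multispectral image
    pan     : (H, W) panchromatic image
    pan_low : (H, W) low-pass version of PAN at MS spatial scale
    The high-frequency component (pan - pan_low) is added to every
    band, weighted by an injection gain: a global unit gain
    ("additive") or the band-wise ratio gain of high-pass
    modulation ("hpm"), which preserves the band ratios.
    """
    detail = pan - pan_low
    if mode == "hpm":
        gains = ms_up / (pan_low + 1e-12)   # modulation gain per band/pixel
    else:
        gains = np.ones_like(ms_up)
    return ms_up + gains * detail
```

The design space of injection methods largely reduces to how `detail` is extracted (CS vs. MRA) and how `gains` are chosen; the paper's contribution is a two-step scheme with two data-driven gains.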
- Published
- 2020
25. A New Variational Approach Based on Proximal Deep Injection and Gradient Intensity Similarity for Spatio-Spectral Image Fusion
- Author
-
Jia-Qing Miao, Liang-Jian Deng, Gemine Vivone, Jin-Fan Hu, Xi-Le Zhao, Zhong-Cheng Wu, and Ting-Zhu Huang
- Subjects
Optimization ,Atmospheric Science ,Similarity (geometry) ,Generalization ,Computer science ,Multispectral image ,Geophysics. Cosmic physics ,0211 other engineering and technologies ,pansharpening ,02 engineering and technology ,gradient intensity similarity ,Regularization (mathematics) ,Convolutional neural network ,image fusion ,variational approaches ,dynamic gradient sparsity ,remote sensing ,Convergence (routing) ,0202 electrical engineering, electronic engineering, information engineering ,Training ,Computers in Earth Sciences ,Image resolution ,TC1501-1800 ,021101 geological & geomatics engineering ,Image fusion ,Spatial resolution ,QC801-809 ,Data models ,Deep convolutional neural networks (DCNN) ,Ocean engineering ,020201 artificial intelligence & image processing ,Convolutional neural networks ,Algorithm - Abstract
Pansharpening is a much-debated spatio-spectral fusion problem. It refers to the fusion of a high-spatial-resolution panchromatic image with a lower-spatial- but higher-spectral-resolution multispectral image in order to obtain an image with high resolution in both domains. In this article, we propose a novel variational optimization-based (VO) approach that addresses this issue by incorporating the outcome of a deep convolutional neural network (DCNN). This solution can take advantage of both paradigms. On the one hand, higher performance can be expected by introducing machine learning (ML) methods, based on the training-by-examples philosophy, into VO approaches. On the other hand, the combination of VO techniques with DCNNs can aid the generalization ability of the latter. In particular, we formulate an $\ell _2$-based proximal deep injection term to evaluate the distance between the DCNN outcome and the desired high-spatial-resolution multispectral image. This represents the regularization term of our VO model. Furthermore, a new data-fitting term measuring the spatial fidelity is proposed. Finally, the proposed convex VO problem is efficiently solved by exploiting the framework of the alternating direction method of multipliers (ADMM), thus guaranteeing the convergence of the algorithm. Extensive experiments on both simulated and real datasets demonstrate that the proposed approach can outperform state-of-the-art spatio-spectral fusion methods, even showing a significant generalization ability. The project page is at https://liangjiandeng.github.io/Projects_Res/DMPIF_2020jstars.html .
- Published
- 2020
26. Two-Stage Pansharpening Based on Multi-Level Detail Injection Network
- Author
-
Shaosheng Fan, Jianwen Hu, and Chenguang Du
- Subjects
010504 meteorology & atmospheric sciences ,General Computer Science ,Computer science ,Multispectral image ,detail injection block ,0211 other engineering and technologies ,convolutional neural network ,Pansharpening ,02 engineering and technology ,01 natural sciences ,Convolutional neural network ,Spectral line ,Distortion ,General Materials Science ,Image resolution ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,Block (data storage) ,business.industry ,Deep learning ,General Engineering ,Pattern recognition ,Panchromatic film ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Artificial intelligence ,business ,lcsh:TK1-9971 ,residual learning - Abstract
Pansharpening is an effective technology for obtaining high-resolution multispectral (HRMS) images by fusing low-resolution multispectral (LRMS) images and high-resolution panchromatic (PAN) images. With the rapid development of deep learning, several deep-learning-based pansharpening methods have been proposed. Although fused images have greatly improved, there is still room for improvement: for example, spectral preservation is not good enough, and the details of fused images are not rich enough. To address these problems, a two-stage pansharpening method based on a convolutional neural network (CNN) is proposed. In the first stage, image super-resolution with residual blocks is used to enhance the LRMS image; to preserve spectra, a new spectral loss function inspired by the SAM (spectral angle mapper) index is proposed. The second stage is the fusion stage, in which a detail injection block is proposed by combining detail injection and CNNs. Experiments on WorldView2 and GeoEye1 images demonstrate that our fused images present more spatial details and better spectra compared with existing methods.
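A SAM-inspired spectral loss like the one described can be written directly from the definition of the spectral angle. This numpy version is a generic sketch, not the authors' exact loss:

```python
import numpy as np

def sam_loss(pred, ref, eps=1e-12):
    """Mean spectral angle (in radians) between two (bands, H, W) images.

    Penalising the angle between per-pixel spectra, rather than their
    Euclidean distance, targets spectral distortion directly: the loss
    is invariant to a per-pixel rescaling of the spectrum.
    """
    p = pred.reshape(pred.shape[0], -1)
    r = ref.reshape(ref.shape[0], -1)
    cos = (p * r).sum(axis=0) / (
        np.linalg.norm(p, axis=0) * np.linalg.norm(r, axis=0) + eps)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)).mean())
```

The scale invariance is exactly why a SAM term complements, rather than replaces, an intensity loss such as L1 or L2 during training.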
- Published
- 2020
27. Pansharpening via Unsupervised Convolutional Neural Networks
- Author
-
Jiangan Xie, Yong Feng, Shangbo Zhou, and Shuyue Luo
- Subjects
Atmospheric Science ,Computer science ,Multispectral image ,Geophysics. Cosmic physics ,0211 other engineering and technologies ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,Pansharpening ,Convolutional neural network ,Image (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,unsupervised ,Computers in Earth Sciences ,Representation (mathematics) ,Spatial analysis ,TC1501-1800 ,021101 geological & geomatics engineering ,Network architecture ,business.industry ,QC801-809 ,Supervised learning ,Pattern recognition ,Convolutional neural networks (CNNs) ,Panchromatic film ,Ocean engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,business - Abstract
Pansharpening is normally utilized to take full advantage of all the available spectral and spatial information that are derived from a low-spatial-resolution multispectral (MS) image and its associated high-spatial-resolution (HR) panchromatic (PAN) image, respectively, producing a fused MS image with high spectral and spatial resolutions. Many methods have been recently developed based on convolutional neural networks (CNNs) for the pansharpening task, but most of them still have some drawbacks: 1) The information cannot efficiently flow in their simple stacked convolutional architectures, thereby hindering the representation ability of the networks. 2) They are commonly trained using supervised learning, which does not only require an extra effort to produce the simulated training data, but can also lead to scale-related problems in the fusion results. In this article, we propose a novel unsupervised CNN-based pansharpening method to overcome these limitations. Specifically, we design an iterative network architecture, in which a PAN-guided strategy and a set of skip connections are adopted to continuously extract and fuse the features from the input, thus enhancing the information reuse and transmission. Besides, we propose a new loss function for unsupervised training in which the relationships between the input MS and PAN images and the fused MS image are used to design the spatial constrains and spectral consistency, respectively. The typical quality index with no-reference is also added to this function to further adjust the spectral and spatial qualities. The designed loss function allows the network to be learned only on input images, without any hand-crafted labels (reference HR MS image). 
We evaluated the effectiveness of our designed network architecture and the combined loss function, and the experiments confirm that our unsupervised strategy obtains promising results, with minor spectral and spatial distortions, compared with traditional and supervised methods.
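The unsupervised loss described above pairs a spectral-consistency term with a spatial constraint. Below is a minimal numpy sketch of that idea, using average pooling as a stand-in degradation model and the band mean as a stand-in intensity; the paper's actual network, filters, and no-reference index are not reproduced here.

```python
import numpy as np

def downsample(img, r=4):
    """Average-pool an (H, W, C) image by factor r (a crude degradation model)."""
    h, w = img.shape[0] - img.shape[0] % r, img.shape[1] - img.shape[1] % r
    return img[:h, :w].reshape(h // r, r, w // r, r, -1).mean(axis=(1, 3))

def unsupervised_loss(fused, ms_lr, pan, alpha=1.0, beta=1.0):
    """Two no-reference terms: spectral consistency (downsampled fused vs. the
    input low-res MS) and a spatial constraint (band mean of fused vs. PAN)."""
    r = pan.shape[0] // ms_lr.shape[0]
    spectral = np.mean((downsample(fused, r) - ms_lr) ** 2)
    spatial = np.mean((fused.mean(axis=2) - pan) ** 2)
    return alpha * spectral + beta * spatial
```

The loss is zero exactly when the fused image degrades back to the input MS and its intensity matches the PAN, which is the self-supervision signal the abstract describes.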
- Published
- 2020
28. Image Restoration for the MRA-Based Pansharpening Method
- Author
-
Jiao Jiao and Lingda Wu
- Subjects
pansharpening ,multiresolution analysis ,image restoration ,blind deblurring ,iterative back-projection ,Tikhonov regularization ,ringing artifacts - Abstract
By merging high-resolution panchromatic (PAN) images with low-resolution multispectral (MS) images, high-resolution MS images with complementary information can be obtained, i.e., pansharpening. Multiresolution analysis (MRA) methods have attracted widespread attention in the pansharpening field. The spatial detail information injected into MS images is extracted from PAN images by MRA tools. Since such methods often suffer from spatial distortion and ringing artifacts, a restoration algorithm based on blind deblurring and iterative back-projection (IBP) is proposed in this paper. First, a blind deblurring method based on the Tikhonov regularization constraint model is used to estimate the blurring filter. Second, spatial details extracted from PAN images are modulated into MS images using a high-pass modulation (HPM) framework, and the fusion images are then spatially enhanced based on the modulation results and the blurring filter. Finally, the IBP technique is used to project the reconstruction error back to iteratively update and optimize the desired high-resolution images. Experiments are performed on datasets acquired by different satellites at full and reduced resolution, and eight state-of-the-art MRA-based pansharpening methods are used for validation. Compared to the enhanced back-projection (EBP) algorithm, the proposed restoration method better improves the spectral and spatial quality of MRA-based pansharpening. The results indicate the effectiveness and superiority of the proposed method.
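The iterative back-projection step can be sketched independently of the deblurring stage. The toy below uses average pooling as a surrogate for the estimated blur-plus-decimation operator (the paper estimates the real blurring filter by blind deblurring, which is not reproduced here).

```python
import numpy as np

def degrade(img, r):
    """Surrogate blur + decimation: r-by-r average pooling."""
    h, w = img.shape[0] - img.shape[0] % r, img.shape[1] - img.shape[1] % r
    return img[:h, :w].reshape(h // r, r, w // r, r).mean(axis=(1, 3))

def back_project(hr_init, lr_obs, r=2, n_iter=20, step=1.0):
    """IBP: repeatedly push the low-resolution reconstruction error back
    into the high-resolution estimate until the residual vanishes."""
    hr = hr_init.astype(float).copy()
    for _ in range(n_iter):
        residual = lr_obs - degrade(hr, r)               # error at observed scale
        hr += step * np.kron(residual, np.ones((r, r)))  # back-project to HR grid
    return hr
```

With this particular degradation/upsampling pair the low-resolution residual is driven to zero, which is the consistency property IBP enforces.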
- Published
- 2020
29. Pansharpening Based on Low-Rank Fuzzy Fusion and Detail Supplement
- Author
-
Chenxu Wan, Hangyuan Lu, Weiguo Wan, Shuying Huang, and Yong Yang
- Subjects
pansharpening ,detail-injection ,detail supplementation ,fuzzy logic ,matrix decomposition ,sparse matrix ,fusion rules - Abstract
Pansharpening is a technique used to reconstruct a high-resolution (HR) multispectral (MS) image by combining an HR panchromatic (PAN) image with a low-resolution MS image. In recent years, the detail-injection model has demonstrated excellent performance in pansharpening, thus receiving wide attention. Obtaining appropriate details is vital for the detail-injection model. Therefore, this article presents a detail optimization approach to obtain more precise high-frequency (HF) details for pansharpening. The proposed method comprises two steps. In the first step, we design a low-rank fuzzy fusion model to fuse the HF details of the PAN and MS images. In this model, the high frequencies of the PAN and upsampled MS images are decomposed into low-rank and sparse components, and the corresponding fusion rules are designed according to their characteristics. Because some details of the PAN image are replaced with those of the MS image, using them directly as injection details may result in redundant information or spatial distortion. To solve this problem and further optimize the details, in the second step, we construct an adaptive detail supplement model. Based on the similarity and correlation between the fused HF and the original HF of the PAN image, the fused details are supplemented to obtain the final injection details. Experimental results on the IKONOS, Pleiades, QuickBird, and WorldView-2 datasets demonstrate that the proposed algorithm is better than the state-of-the-art methods in maintaining spectral information and improving spatial details.
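The low-rank/sparse split at the heart of the fusion model can be illustrated with a one-shot surrogate: a truncated SVD for the low-rank part and soft thresholding of the remainder for the sparse part. This is only a sketch of the decomposition idea; the paper's fuzzy fusion rules and iterative optimization are not reproduced.

```python
import numpy as np

def lowrank_sparse_split(M, rank=1, thresh=0.1):
    """Crude low-rank/sparse decomposition: truncated SVD gives the low-rank
    component; the soft-thresholded residual is kept as the sparse component."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    R = M - L
    S = np.sign(R) * np.maximum(np.abs(R) - thresh, 0.0)  # soft threshold
    return L, S
```

An exactly rank-1 input is fully captured by the low-rank term, leaving an empty sparse component.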
- Published
- 2020
30. A Two-Stage Pansharpening Method for the Fusion of Remote-Sensing Images
- Author
-
Yazhen Wang, Guojun Liu, Rui Zhang, and Junmin Liu
- Subjects
General Earth and Planetary Sciences ,pansharpening ,component substitution ,global sparse gradient ,two-stage - Abstract
The pansharpening (PS) of remote-sensing images aims to fuse a high-resolution panchromatic image with several low-resolution multispectral images to obtain a high-resolution multispectral image. In this work, a two-stage PS model is proposed by integrating the ideas of component replacement and the variational method. The global sparse gradient of the panchromatic image is extracted by a variational method, and the weight function is constructed by combining it with the gradient of the multispectral image; the global sparse gradient provides more robust gradient information. Furthermore, we refine the results in order to reduce spatial and spectral distortions. Experimental results show that our method generalizes well to QuickBird, Gaofen-1, and WorldView-4 satellite data. Evaluation with seven metrics demonstrates that the proposed two-stage method enhances spatial details and subjective visual quality better than other state-of-the-art methods. In the quantitative evaluation, it also improves substantially on the other methods, reaching a maximal improvement of 60% on some metrics.
- Published
- 2022
- Full Text
- View/download PDF
31. A Three Stages Detail Injection Network for Remote Sensing Images Pansharpening
- Author
-
Yuanyuan Wu, Siling Feng, Cong Lin, Haijie Zhou, and Mengxing Huang
- Subjects
General Earth and Planetary Sciences ,multispectral images ,pansharpening ,convolutional neural network ,cascade cross-scale ,detail compensation mechanism - Abstract
Multispectral (MS) pansharpening is crucial to improve the spatial resolution of MS images, and it has the potential to provide images with high spatial and spectral resolutions. Pansharpening based on deep learning is a topical approach to dealing with the distortion of spatio-spectral information. To improve the preservation of spatio-spectral information, we propose a novel three-stage detail injection pansharpening network (TDPNet) for remote sensing images. First, we put forward a dual-branch multiscale feature extraction block, which extracts details at four scales from panchromatic (PAN) images and from the difference between duplicated PAN and MS images. Next, cascade cross-scale fusion (CCSF) employs fine-scale fusion information as prior knowledge for the coarse-scale fusion to compensate for the information lost during downsampling and to retain high-frequency details. CCSF combines fine-scale and coarse-scale fusion based on residual learning and the prior information of the four scales. Last, we design a multiscale detail compensation mechanism and a multiscale skip connection block to reconstruct the injected details, which strengthens spatial details and reduces parameters. Abundant experiments on three satellite datasets at degraded and full resolutions confirm that TDPNet balances spectral information and spatial details and improves the fidelity of the sharpened MS images. Both the quantitative and subjective evaluation results indicate that TDPNet outperforms the compared state-of-the-art approaches in generating MS images with high spatial resolution.
- Published
- 2022
- Full Text
- View/download PDF
32. Remote Sensing Pansharpening by Full-Depth Feature Fusion
- Author
-
Zi-Rong Jin, Yu-Wei Zhuo, Tian-Jing Zhang, Xiao-Xu Jin, Shuaiqi Jing, and Liang-Jian Deng
- Subjects
convolutional neural networks ,pansharpening ,full-depth feature fusion - Abstract
Pansharpening is an important yet challenging remote sensing image processing task, which aims to reconstruct a high-resolution (HR) multispectral (MS) image by fusing an HR panchromatic (PAN) image and a low-resolution (LR) MS image. Although deep learning (DL)-based pansharpening methods have achieved encouraging performance, they fail to fully utilize the deep semantic features and shallow contextual features when fusing an HR PAN image and an LR MS image. In this paper, we propose an efficient full-depth feature fusion network (FDFNet) for remote sensing pansharpening. Specifically, we design three distinctive branches called the PAN-branch, the MS-branch, and the fusion-branch. The features extracted from the PAN and MS branches are progressively injected into the fusion branch at every depth to make the information fusion broader and more comprehensive. With this structure, the low-level contextual features and high-level semantic features can be characterized and integrated adequately. Extensive experiments on reduced- and full-resolution datasets acquired from the WorldView-3, QuickBird, and GaoFen-2 sensors demonstrate that the proposed FDFNet, with fewer than 100,000 parameters, performs better than other detail injection-based proposals and several state-of-the-art approaches, both visually and quantitatively.
- Published
- 2022
33. Multispectral Characteristics of Glacier Surface Facies (Chandra-Bhaga Basin, Himalaya, and Ny-Ålesund, Svalbard) through Investigations of Pixel and Object-Based Mapping Using Variable Processing Routines
- Author
-
Shridhar D. Jawak, Sagar F. Wankhede, Alvarinho J. Luis, and Keshava Balakrishna
- Subjects
General Earth and Planetary Sciences ,surface facies of glaciers ,pixel-based image analysis ,geographic object-based image analysis ,atmospheric corrections ,pansharpening ,image processing routines - Abstract
Fundamental image processing methods, such as atmospheric corrections and pansharpening, influence the signal of the pixel. This morphs the spectral signature of target features causing a change in both the final spectra and the way different mapping methods may assign thematic classes. In the current study, we aim to identify the variations induced by popular image processing methods in the spectral reflectance and final thematic maps of facies. To this end, we have tested three different atmospheric corrections: (a) Quick Atmospheric Correction (QUAC), (b) Dark Object Subtraction (DOS), and (c) Fast Line-of-Sight Atmospheric Analysis of Hypercubes (FLAASH), and two pansharpening methods: (a) Hyperspherical Color Sharpening (HCS) and (b) Gram–Schmidt (GS). WorldView-2 and WorldView-3 satellite images over Chandra-Bhaga Basin, Himalaya, and Ny-Ålesund, Svalbard are tested via spectral subsets in traditional (BGRN1), unconventional (CYRN2), visible to near-infrared (VNIR), and the complete available spectrum (VNIR_SWIR). Thematic mapping was comparatively performed using 12 pixel-based (PBIA) algorithms and 3 object-based (GEOBIA) rule sets. Thus, we test the impact of varying image processing routines, effectiveness of specific spectral bands, utility of PBIA, and versatility of GEOBIA for mapping facies. Our findings suggest that the image processing routines exert an extreme impact on the end spectral reflectance. DOS delivers the most reliable performance (overall accuracy = 0.64) averaged across all processing schemes. GEOBIA delivers much higher accuracy when the QUAC correction is employed and if the image is enhanced by GS pansharpening (overall accuracy = 0.79). SWIR bands have not enhanced the classification results and VNIR band combination yields superior performance (overall accuracy = 0.59). 
The maximum likelihood classifier (PBIA) delivers consistent and reliable performance (overall accuracy = 0.61) across all processing schemes and can be used after DOS correction without pansharpening, as it deteriorates spectral information. GEOBIA appears to be robust against modulations in atmospheric corrections but is enhanced by pansharpening. When utilizing GEOBIA, we find that a combination of spatial and spectral object features (rule set 3) delivers the best performance (overall accuracy = 0.86), rather than relying only on spectral (rule set 1) or spatial (rule set 2) object features. The multiresolution segmentation parameters used here may be transferable to other very high resolution (VHR) VNIR mapping of facies as it yielded consistent objects across all processing schemes.
- Published
- 2022
34. Reproducibility of Pansharpening Methods and Quality Indexes versus Data Formats
- Author
-
Andrea Garzelli, Luciano Alparone, Bruno Aiazzi, and Alberto Arienzo
- Subjects
data formats ,multispectral images ,pansharpening ,remote sensing ,reproducibility ,statistical quality indexes - Abstract
In this work, we investigate whether the performance of pansharpening methods depends on their input data format: in the case of spectral radiance, either its original floating-point format or an integer-packed fixed-point format. It is theoretically proven and experimentally demonstrated that methods based on multiresolution analysis are unaffected by the data format. Conversely, the format is crucial for methods based on component substitution, unless the intensity component is calculated by means of a multivariate linear regression between the upsampled bands and the lowpass-filtered PAN. Another concern related to data formats is whether quality measurements, carried out by means of normalized indexes, depend on the format of the data on which they are calculated. We focus on some of the most widely used with-reference indexes to provide novel insight into their behaviors. Both theoretical analyses and computer simulations, carried out on GeoEye-1 and WorldView-2 datasets with the products of nine pansharpening methods, show that performance does not depend on the data format for purely radiometric indexes, while it depends significantly on the data format, either floating-point or fixed-point, for a purely spectral index like the spectral angle mapper. The dependence on the data format is weak for indexes that balance spectral and radiometric similarity, like the family of Q2n indexes based on hypercomplex algebra.
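The format sensitivity of the spectral angle mapper is easy to demonstrate numerically: SAM is invariant to a pure gain but not to the additive offset that fixed-point packing can introduce. An illustrative numpy check on synthetic data (not the paper's datasets):

```python
import numpy as np

def sam(a, b, eps=1e-12):
    """Mean spectral angle mapper (radians) between two (H, W, C) images."""
    x = a.reshape(-1, a.shape[-1]).astype(float)
    y = b.reshape(-1, b.shape[-1]).astype(float)
    cos = (x * y).sum(1) / (np.linalg.norm(x, axis=1) * np.linalg.norm(y, axis=1) + eps)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

rng = np.random.default_rng(2)
ref = rng.random((16, 16, 4))
fus = ref + 0.05 * rng.random((16, 16, 4))

s_float  = sam(ref, fus)                    # floating-point radiances
s_gain   = sam(1000.0 * ref, 1000.0 * fus)  # pure gain: angles unchanged
s_offset = sam(ref + 0.5, fus + 0.5)        # gain + offset packing: angles change
```

The gain-only rescaling leaves every per-pixel angle untouched, while the offset shifts all spectral vectors toward the diagonal and changes the measured angles.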
- Published
- 2021
- Full Text
- View/download PDF
35. Convolutional Neural Network for Pansharpening with Spatial Structure Enhancement Operator
- Author
-
Yuhui Zheng, Jianwei Zhang, Yan Zhang, and Weiwei Huang
- Subjects
convolutional neural network ,Sobel operator ,pansharpening ,spatial structure enhancement ,linear model - Abstract
Pansharpening aims to fuse the abundant spectral information of multispectral (MS) images and the spatial details of panchromatic (PAN) images, yielding a high-spatial-resolution MS (HRMS) image. Traditional methods focus only on the linear model, ignoring the fact that the degradation process is a nonlinear inverse problem. Because convolutional neural networks (CNNs) are remarkably effective at overcoming the shortcomings of traditional linear models, they have been adapted for pansharpening in the past few years. However, most existing CNN-based methods cannot take full advantage of the structural information of images. To address this problem, a new pansharpening method combining a spatial structure enhancement operator with a CNN architecture is employed in this study. The proposed method uses the Sobel operator as an edge-detection operator to extract abundant high-frequency information from the input PAN and MS images, hence obtaining abundant spatial features of the images. Moreover, we utilize the CNN to acquire the spatial feature maps, preserving the information in both the spatial and spectral domains. Simulated and real-data experiments demonstrated that our method has excellent performance in both quantitative and visual evaluation.
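The Sobel-based high-frequency extraction can be sketched directly; this is only the hand-crafted operator feeding the network, not the CNN itself.

```python
import numpy as np

SOBEL_X = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
SOBEL_Y = SOBEL_X.T

def filter3(img, k):
    """'Same'-size 3x3 correlation with zero padding."""
    p = np.pad(img.astype(float), 1)
    out = np.zeros(img.shape, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def sobel_highfreq(img):
    """Gradient magnitude: a high-frequency map of the input band."""
    return np.hypot(filter3(img, SOBEL_X), filter3(img, SOBEL_Y))
```

On a vertical step edge the magnitude is concentrated on the two columns adjacent to the edge and vanishes in flat regions, which is exactly the structural information the method injects.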
- Published
- 2021
36. Effect of Image-Processing Routines on Geographic Object-Based Image Analysis for Mapping Glacier Surface Facies from Svalbard and the Himalayas
- Author
-
Shridhar D. Jawak, Sagar F. Wankhede, Alvarinho J. Luis, and Keshava Balakrishna
- Subjects
geographic object-based image analysis ,atmospheric correction ,pansharpening ,WorldView-2 ,Ny-Ålesund ,Chandra–Bhaga basin ,glacier surface facies ,General Earth and Planetary Sciences - Abstract
Advancements in remote sensing have led to the development of Geographic Object-Based Image Analysis (GEOBIA). This method of information extraction focuses on segregating correlated pixels into groups for easier classification. This is of excellent use in analyzing very-high-resolution (VHR) data. The application of GEOBIA for glacier surface mapping, however, necessitates multiple scales of segmentation and input of supportive ancillary data. The mapping of glacier surface facies presents a unique problem to GEOBIA on account of its separable but closely matching spectral characteristics and often disheveled surface. Debris cover can induce challenges and requires additions of slope, temperature, and short-wave infrared data as supplements to enable efficient mapping. Moreover, as the influence of atmospheric corrections and image sharpening can derive variations in the apparent surface reflectance, a robust analysis of the effects of these processing routines in a GEOBIA environment is lacking. The current study aims to investigate the impact of three atmospheric corrections, Dark Object Subtraction (DOS), Quick Atmospheric Correction (QUAC), and Fast Line-of-Sight Atmospheric Analysis of Hypercubes (FLAASH), and two pansharpening methods, viz., Gram–Schmidt (GS) and Hyperspherical Color Sharpening (HCS), on the classification of surface facies using GEOBIA. This analysis is performed on VHR WorldView-2 imagery of selected glaciers in Ny-Ålesund, Svalbard, and Chandra–Bhaga basin, Himalaya. The image subsets are segmented using multiresolution segmentation with constant parameters. Three rule sets are defined: rule set 1 utilizes only spectral information, rule set 2 contains only spatial and contextual features, and rule set 3 combines both spatial and spectral attributes. Rule set 3 performs the best across all processing schemes with the highest overall accuracy, followed by rule set 1 and lastly rule set 2. This trend is observed for every image subset. 
Among the atmospheric corrections, DOS displays consistent performance and is the most reliable, followed by QUAC and FLAASH. Pansharpening improved overall accuracy and GS performed better than HCS. The study reports robust segmentation parameters that may be transferable to other VHR-based glacier surface facies mapping applications. The rule sets are adjusted across the processing schemes to adjust to the change in spectral characteristics introduced by the varying routines. The results indicate that GEOBIA for glacier surface facies mapping may be less prone to the differences in spectral signatures introduced by different atmospheric corrections but may respond well to increasing spatial resolution. The study highlighted the role of spatial attributes for mapping fine features, and in combination with appropriate spectral features may enhance thematic classification.
- Published
- 2022
37. Detail Injection-Based Deep Convolutional Neural Networks for Pansharpening
- Author
-
Jocelyn Chanussot, Cheng Jin, Gemine Vivone, and Liang-Jian Deng
- Subjects
multiresolution analysis (MRA) ,component substitution (CS) ,pansharpening ,image fusion ,remote sensing ,deep convolutional neural network (DCNN) - Abstract
The fusion of high-spatial-resolution panchromatic (PAN) data with simultaneously acquired multispectral (MS) data of lower spatial resolution is a hot topic, often called pansharpening. In this article, we exploit the combination of machine learning techniques and fusion schemes introduced to address the pansharpening problem. In particular, deep convolutional neural networks (DCNNs) are proposed to solve this issue. These are first combined with the traditional component substitution and multiresolution analysis fusion schemes in order to estimate the nonlinear injection models that rule the combination of the upsampled low-resolution MS image with the details extracted under the two philosophies. Furthermore, inspired by these two approaches, we also developed another DCNN for pansharpening. This is fed by the direct difference between the PAN image and the upsampled low-resolution MS image. Extensive experiments conducted at both reduced and full resolutions demonstrate that this latter convolutional neural network outperforms both the other detail injection-based proposals and several state-of-the-art pansharpening methods.
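The two classical detail-extraction philosophies onto which the DCNNs are grafted can be summarized in a few lines: CS subtracts an intensity component from the PAN image, while MRA subtracts a low-pass version of the PAN image. A toy numpy sketch, using a box blur as the low-pass filter and the band mean as the intensity (the learned nonlinear injection models are not reproduced):

```python
import numpy as np

def box_blur(img, r=2):
    """Crude low-pass surrogate for the MRA branch (moving average)."""
    n = 2 * r + 1
    p = np.pad(img.astype(float), r, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for i in range(n):
        for j in range(n):
            out += p[i:i + img.shape[0], j:j + img.shape[1]]
    return out / (n * n)

def inject(ms_up, pan, gains, scheme="mra"):
    """fused_k = ms_k + g_k * d: CS takes d = PAN - intensity, MRA takes
    d = PAN - lowpass(PAN); a DCNN would learn the injection instead."""
    if scheme == "cs":
        detail = pan - ms_up.mean(axis=2)   # simple intensity component
    else:
        detail = pan - box_blur(pan)        # high-pass PAN detail
    return ms_up + gains[None, None, :] * detail[:, :, None]
```

A perfectly flat PAN image carries no high-pass detail, so the MRA branch returns the upsampled MS image unchanged.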
- Published
- 2021
38. Pansharpening based on convolutional autoencoder and multi-scale guided filter
- Author
-
Ala A. Alsanabani, Ahmad Al Smadi, Atif Mehmood, Shuyuan Yang, Min Wang, and Zhang Kai
- Subjects
pansharpening ,convolutional autoencoder ,adaptive intensity-hue-saturation (AIHS) ,guided image filtering - Abstract
In this paper, we propose a pansharpening method based on a convolutional autoencoder. The convolutional autoencoder is a type of convolutional neural network (CNN) whose objective is to reduce the input dimension and characterize image features with high accuracy. First, the autoencoder network is trained to reduce the difference between degraded panchromatic image patches and the reconstructed original panchromatic image patches. The intensity component, which is derived by adaptive intensity-hue-saturation (AIHS), is then fed into the trained convolutional autoencoder network to generate an enhanced intensity component of the multispectral image. Pansharpening is accomplished by improving the panchromatic image from the enhanced intensity component using a multi-scale guided filter; the semantic detail is then injected into the upsampled multispectral image. Real and degraded datasets are utilized for the experiments, which show that the proposed technique preserves high spatial details and high spectral characteristics simultaneously. Furthermore, the experimental results demonstrate that the proposed method achieves state-of-the-art performance in terms of subjective and objective assessments on remote sensing data.
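The AIHS intensity component mentioned above is typically obtained by least-squares band weighting. A minimal sketch of that step alone (the autoencoder enhancement and guided filtering are not reproduced):

```python
import numpy as np

def aihs_intensity(ms_up, pan):
    """Adaptive IHS intensity: least-squares band weights w minimizing
    ||PAN - sum_k w_k * MS_k||^2, then I = sum_k w_k * MS_k."""
    A = ms_up.reshape(-1, ms_up.shape[-1]).astype(float)
    w, *_ = np.linalg.lstsq(A, pan.ravel().astype(float), rcond=None)
    return ms_up @ w, w
```

When the PAN is an exact linear combination of the MS bands, the fitted weights recover that combination and the intensity matches the PAN.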
- Published
- 2021
39. MDCwFB: A Multilevel Dense Connection Network with Feedback Connections for Pansharpening
- Author
-
Minghao Xiang, Weisheng Li, and Xuesong Liang
- Subjects
pansharpening ,convolutional neural network ,feedback ,multilevel ,double stream structure - Abstract
In most practical applications of remote sensing images, high-resolution multispectral images are needed. Pansharpening aims to generate high-resolution multispectral (MS) images from the input of high-spatial-resolution single-band panchromatic (PAN) images and low-spatial-resolution multispectral images. Inspired by the remarkable results of other researchers in pansharpening based on deep learning, we propose a multilevel dense connection network with a feedback connection. Our network consists of four parts. The first part consists of two identical subnetworks that extract features from the PAN and MS images. The second part is a multilevel feature fusion and recovery network, which fuses images in the feature domain and encodes and decodes features at different levels so that the network can fully capture different levels of information. The third part is a continuous feedback operation, which refines shallow features by feedback. The fourth part is an image reconstruction network. High-quality images are recovered by making full use of multistage decoding features through dense connections. Experiments on different satellite datasets show that our proposed method is superior to existing methods in both subjective visual evaluation and objective evaluation indicators. Compared with the results of other models, our results achieve significant gains on the objective index values used to measure the spectral quality and spatial details of the generated images, namely the spectral angle mapper (SAM), the relative dimensionless global error in synthesis (ERGAS), and the structural similarity index (SSIM).
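Of the reported metrics, ERGAS is simple to state: it aggregates per-band relative RMSE and equals 0 for a perfect reconstruction. An illustrative implementation, assuming the common reduced-resolution protocol with a resolution ratio of 4:

```python
import numpy as np

def ergas(fused, ref, ratio=4):
    """ERGAS = 100/ratio * sqrt(mean_k (RMSE_k / mean_k)^2); 0 = perfect match."""
    terms = []
    for k in range(ref.shape[-1]):
        rmse = np.sqrt(np.mean((fused[..., k] - ref[..., k]) ** 2))
        terms.append((rmse / ref[..., k].mean()) ** 2)
    return float(100.0 / ratio * np.sqrt(np.mean(terms)))
```

A uniform error of 0.1 against a unit-mean reference gives 10% relative RMSE per band, hence an ERGAS of 2.5 at ratio 4.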
- Published
- 2021
40. Multispectral satellite imagery processing to recognize the archaeological features: The NW part of Mount Etna (Sicily, Italy)
- Author
-
Gabriele Fargione, Giuseppe Mussumeci, Alessio Candiano, Michele Mangiameli, and Andrea Gennaro
- Subjects
remote sensing ,archeology ,classification ,GIS ,pansharpening ,multispectral satellite imagery
- Published
- 2019
41. Robust Band-Dependent Spatial-Detail Approaches for Panchromatic Sharpening
- Author
-
Gemine Vivone
- Subjects
pansharpening ,component substitution (CS) ,band-dependent spatial-detail (BDSD) ,image fusion ,remote sensing ,robust regression ,sharpening - Abstract
Pansharpening refers to the fusion of a multispectral (MS) image, which has finer spectral but coarser spatial resolution, with a panchromatic (PAN) image. The classical pansharpening problem can be addressed with component substitution or multiresolution analysis techniques. One of the most notable approaches in the former class is the band-dependent spatial-detail (BDSD) method. It has shown state-of-the-art performance, in particular when the fusion of four-band datasets is addressed. However, newer sensors, such as the WorldView-2/-3 ones, usually acquire MS images with more than four spectral bands to be fused with the PAN image, and the BDSD method has shown performance limitations in these cases. Thus, in this paper, several BDSD-based approaches are provided to solve this issue, making BDSD robust with respect to the number of spectral bands to be fused. The experimental results, conducted both at reduced and at full resolution on four real datasets acquired by the IKONOS, QuickBird, WorldView-2, and WorldView-3 sensors, demonstrate the validity of the proposed approaches against the benchmark.
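The band-dependent idea can be illustrated in its simplest one-coefficient form: estimate a separate least-squares injection gain for each band. Full BDSD jointly regresses each band's detail on the PAN and all MS bands; the sketch below keeps only a single gain per band and is purely illustrative.

```python
import numpy as np

def band_gain(ms_band_hp, pan_hp):
    """Least-squares injection gain for one band: argmin_g ||ms_hp - g*pan_hp||^2.
    (One-coefficient special case of BDSD's band-dependent regression.)"""
    return float((ms_band_hp * pan_hp).sum() / (pan_hp * pan_hp).sum())

def bdsd_like_fuse(ms_up, ms_hp, pan_hp):
    """Inject the PAN high-pass into each band with its own estimated gain."""
    gains = np.array([band_gain(ms_hp[..., k], pan_hp)
                      for k in range(ms_up.shape[-1])])
    return ms_up + gains[None, None, :] * pan_hp[:, :, None]
```

Each band gets its own gain, so bands whose high-pass content correlates weakly with the PAN receive proportionally less injected detail.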
- Published
- 2019
42. Improving Hypersharpening for WorldView-3 Data
- Author
-
Stefano Baronti, Leonardo Santurri, and Massimo Selva
- Subjects
pansharpening ,hypersharpening ,image fusion ,remote sensing ,WorldView-3 ,hyperspectral imaging - Abstract
In this letter, hypersharpening is analyzed in depth by investigating some weaknesses in its formulation. It is shown that the key formula of the synthesized-band variant can be simplified under certain circumstances. In addition, a novel fusion scheme is proposed, in which the gain factor adopted to weight the injected detail is computed in a different way. This scheme can be applied to fuse a wide range of hyperspectral and multispectral data. In this letter, its effectiveness is demonstrated by taking into account the characteristics of WorldView-3 data.
- Published
- 2019
43. Multispectral Image Fusion Using Fractional-Order Differential and Guided Filtering
- Author
-
Hui Fan, Genji Yuan, and Jinjiang Li
- Subjects
pansharpening ,component substitution framework ,guided filter ,fractional order differential operators - Abstract
Remote sensing satellites provide large numbers of multispectral images. However, due to the limitations of the optical sensors on board, the spatial resolution of multispectral images is relatively low. Pansharpening aims to combine high-resolution panchromatic and low-resolution multispectral images to generate high-resolution multispectral images. In this paper, we propose a pansharpening method based on a component substitution framework. We use fractional-order differential operators and a guided filter to balance the spectral distortion and spatial information loss that occur during remote sensing image fusion: fractional-order differentiation better defines the detail map, and the guided filter enhances the spectral information of the detail map. Experiments show that the proposed method combines spectral and spatial information well and obtains satisfactory results in both subjective visual perception and objective evaluation.
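A common way to realize fractional-order differentiation in image processing is the Grünwald-Letnikov construction, whose mask coefficients follow a simple recurrence; convolving with that mask gives the fractional detail map. The sketch below shows the generic construction in 1-D under that assumption; the paper's exact operator may differ.

```python
# Grünwald-Letnikov coefficients for a fractional derivative of order
# `alpha`; convolving a signal with this mask approximates the
# fractional-order differential used to build detail maps. This is a
# generic construction, not necessarily the paper's exact operator.

def gl_coefficients(alpha, n):
    """First n GL coefficients: c_0 = 1, c_k = c_{k-1} * (k - 1 - alpha) / k."""
    coeffs = [1.0]
    for k in range(1, n):
        coeffs.append(coeffs[-1] * (k - 1 - alpha) / k)
    return coeffs

def fractional_diff(signal, alpha):
    """Apply the GL mask along a 1-D signal (truncated at the left edge)."""
    c = gl_coefficients(alpha, len(signal))
    return [sum(c[k] * signal[i - k] for k in range(i + 1))
            for i in range(len(signal))]

detail = fractional_diff([1.0, 2.0, 3.0], alpha=1.0)
```

For `alpha = 1` the mask reduces to the ordinary backward difference `[1, -1]`, so a linear ramp yields a constant detail signal; fractional `alpha` between 0 and 1 retains longer-range terms, which is what gives the richer detail map the abstract describes.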
- Published
- 2019
44. No-Reference Quality Assessment for Pansharpened Images via Opinion-Unaware Learning
- Author
-
Bingzhong Zhou, Randi Fu, Yo-Sung Ho, Xiangchao Meng, and Feng Shao
- Subjects
General Computer Science ,Computer science ,Multispectral image ,Pansharpening ,spatial distortion ,remote sensing ,Distortion ,General Materials Science ,spectral distortion ,Measure (data warehouse) ,Standard test image ,General Engineering ,Pattern recognition ,Spectral bands ,no-reference quality assessment ,Panchromatic film ,Benchmark (computing) ,Artificial intelligence - Abstract
High-quality pansharpened images with both high spatial resolution and high spectral fidelity are highly desirable in various applications, but existing pansharpening methods may introduce spatial and spectral distortion. To measure the degree of distortion caused by pansharpening methods, we conduct in-depth studies on the subjective and objective quality assessment of pansharpened images. We build a subjective database consisting of 360 images generated from 20 pairs of panchromatic (PAN)/multispectral (MS) images using 18 pansharpening methods. Based on this database, we propose a no-reference quality assessment method that blindly predicts the quality of pansharpened images via opinion-unaware learning. The method first extracts features from the MS images' spectral bands and from typical information indexes that comprehensively reflect spatial distortion, spectral distortion, and the effects of pansharpening on applications. From the features extracted from a training dataset of pristine MS images, a benchmark multivariate Gaussian (MVG) model is learned; the distance between the benchmark MVG and the MVG fitted on the test image then measures its quality. The experimental results show the superiority of our method on our database.
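The opinion-unaware scoring idea can be sketched numerically: fit one multivariate Gaussian to features of pristine images, fit another to features of the test image, and score quality by a Mahalanobis-like distance between the two Gaussians (the larger the distance, the worse the predicted quality). The 2-D "features" below are synthetic placeholders, not the paper's actual features, and the pooled-covariance distance is one common choice rather than necessarily the paper's exact formula.

```python
# NIQE-style opinion-unaware quality sketch: distance between a benchmark
# MVG fitted on pristine features and an MVG fitted on test features.
# Features here are synthetic 2-D placeholders.

def mvg_fit(samples):
    n, d = len(samples), len(samples[0])
    mean = [sum(s[k] for s in samples) / n for k in range(d)]
    cov = [[sum((s[i] - mean[i]) * (s[j] - mean[j]) for s in samples) / (n - 1)
            for j in range(d)] for i in range(d)]
    return mean, cov

def mvg_distance(m1, c1, m2, c2):
    # Pooled covariance (C1 + C2) / 2, inverted explicitly for the 2x2 case.
    a, b = (c1[0][0] + c2[0][0]) / 2, (c1[0][1] + c2[0][1]) / 2
    c, d = (c1[1][0] + c2[1][0]) / 2, (c1[1][1] + c2[1][1]) / 2
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    diff = [m1[0] - m2[0], m1[1] - m2[1]]
    q = sum(diff[i] * inv[i][j] * diff[j] for i in range(2) for j in range(2))
    return q ** 0.5

pristine = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
m1, c1 = mvg_fit(pristine)
# Features that drift from the pristine statistics score a larger distance.
distorted = [[x + 1.0, y + 1.0] for x, y in pristine]
m2, c2 = mvg_fit(distorted)
score = mvg_distance(m1, c1, m2, c2)
```

A test image whose feature statistics match the pristine model scores zero, so the measure needs no subjective opinion scores at training time.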
- Published
- 2019
45. Joint Spectral and Spatial Consistency Priors for Variational Pansharpening
- Author
-
Pengfei Liu
- Subjects
General Computer Science ,Computer science ,spectral and spatial consistency priors ,General Engineering ,Pattern recognition ,Pansharpening ,Prior probability ,Spatial consistency ,General Materials Science ,Artificial intelligence ,Electrical and Electronic Engineering ,variational model - Abstract
This paper proposes a new variational pansharpening model with joint spectral and spatial consistency priors, which aims to fuse a low-resolution (LR) multispectral (MS) image and a high-resolution (HR) panchromatic (Pan) image to produce a pansharpened HR MS image. Specifically, the proposed model combines three consistency terms in a unified variational framework: (1) a local spectral consistency fidelity term, which enforces the degradation-relation-based local spectral consistency constraint between the HR MS and LR MS images; (2) a Hessian-feature spatial consistency prior term, which models the Hessian feature consistency constraint between the HR MS and Pan images to enforce spatial consistency; and (3) a wavelet-based spectral-spatial consistency prior term, which models the consistency between the HR MS image and a constructed wavelet-based matching image. Moreover, the proposed model is efficiently solved by an optimization algorithm designed under the forward-backward splitting framework. Finally, experiments on QuickBird, Pleiades, and GeoEye-1 satellite datasets systematically illustrate that the proposed method achieves better spectral and spatial quality than the compared methods.
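Forward-backward splitting, the solver family named above, alternates a gradient step on the smooth data terms with a proximal step on the nonsmooth prior. The toy objective below (a per-pixel quadratic fidelity plus an l1 prior, whose proximal operator is soft thresholding) stands in for the paper's terms; it only illustrates the iteration structure, not the actual model.

```python
# Forward-backward (proximal gradient) sketch: minimize f(x) + lam*|x|_1
# with f(x) = 0.5*(x - y)^2 per pixel. The minimizer is the
# soft-thresholded y, which the iteration converges to.

def soft(v, t):
    """Proximal operator of t * |.|_1 (soft thresholding)."""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def forward_backward(y, lam, tau=0.5, iters=200):
    x = [0.0] * len(y)
    for _ in range(iters):
        # Forward (gradient) step on f, backward (proximal) step on lam*|x|_1.
        x = [soft(xi - tau * (xi - yi), tau * lam) for xi, yi in zip(x, y)]
    return x

x = forward_backward([3.0, -0.5, 1.0], lam=1.0)
```

Entries with magnitude below the threshold are driven exactly to zero while larger ones are shrunk by the threshold, which is why such splittings handle nonsmooth priors that plain gradient descent cannot.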
- Published
- 2019
46. Pansharpening With Joint Local Low Rank Decomposition and Hierarchical Geometric Filtering
- Author
-
Min Wang, Chen Yang, Yuteng Gao, Chengtian Song, and Shuyuan Yang
- Subjects
General Computer Science ,Correlation coefficient ,Rank (linear algebra) ,Computer science ,Multispectral image ,hierarchical geometric filtering ,Pansharpening ,Image (mathematics) ,Decomposition (computer science) ,General Materials Science ,joint local low-rank decomposition ,General Engineering ,Pattern recognition ,spectral correlation coefficient ,Panchromatic film ,Artificial intelligence - Abstract
Extracting matched details from the PANchromatic (PAN) image and injecting them into the MultiSpectral (MS) images is crucial in pansharpening. In this paper, a new pansharpening method based on Joint Local Low Rank Decomposition (JLLRD) and Hierarchical Geometric Filtering (HGF) is proposed. First, cascaded geometric filtering is performed on the PAN and MS images to extract their multi-scale directional details. Then, a joint local low-rank decomposition is developed to derive low-rank and sparse components for injection. Finally, an adaptive injection rule based on the spectral correlation coefficient is designed to further reduce spectral distortion in the fused images. Several experiments investigate the performance of the proposed JLLRD-HGF method, and the results show that it extracts more accurate injection details and produces less spectral and spatial distortion than its counterparts.
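A correlation-driven injection rule of the kind mentioned above can be sketched as follows: take each band's detail gain from the Pearson correlation between that MS band and the PAN image, so weakly correlated bands receive less injected detail. The paper's exact rule may differ; this only illustrates the idea, on tiny 1-D "images".

```python
# Sketch of a spectral-correlation-based adaptive injection gain.

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def inject(ms_band, detail, pan):
    """Add extracted detail, weighted by band-PAN correlation (clamped at 0)."""
    g = max(pearson(ms_band, pan), 0.0)
    return [m + g * d for m, d in zip(ms_band, detail)]

pan = [10.0, 20.0, 30.0, 40.0]
band = [5.0, 10.0, 15.0, 20.0]        # perfectly correlated with PAN
fused = inject(band, [1.0, 1.0, 1.0, 1.0], pan)
```

Here the band is perfectly correlated with PAN, so the full detail is injected; a band uncorrelated with PAN would receive a gain near zero, limiting spectral distortion.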
- Published
- 2019
47. Spectral-Spatial Interaction Network for Multispectral Image and Panchromatic Image Fusion
- Author
-
Zihao Nie, Lihui Chen, Seunggil Jeon, and Xiaomin Yang
- Subjects
deep learning ,spectral-spatial interaction network ,spectral-spatial attention ,pansharpening ,General Earth and Planetary Sciences - Abstract
Recently, with the rapid development of deep learning (DL), an increasing number of DL-based methods have been applied to pansharpening. Benefiting from the powerful feature extraction capability of deep learning, DL-based methods have achieved state-of-the-art performance in pansharpening. However, most DL-based methods simply fuse multispectral (MS) and panchromatic (PAN) images by concatenation, which cannot make full use of the spectral information of MS images and the spatial information of PAN images. To address this issue, we propose a spectral-spatial interaction network (SSIN) for pansharpening. Unlike previous works, we extract the features of PAN and MS separately and then let them interact repeatedly to incorporate spectral and spatial information progressively. To enhance the spectral-spatial information fusion, we further propose a spectral-spatial attention (SSA) module that yields a more effective spatial-spectral information transfer in the network. Extensive experiments on QuickBird, WorldView-4, and WorldView-2 images demonstrate that our SSIN significantly outperforms other methods in terms of both objective assessment and visual quality.
- Published
- 2022
48. A Local and Nonlocal Feature Interaction Network for Pansharpening
- Author
-
Junru Yin, Jiantao Qu, Le Sun, Wei Huang, and Qiqiang Chen
- Subjects
pansharpening ,deep learning ,Transformer ,feature fusion ,General Earth and Planetary Sciences - Abstract
Pansharpening based on deep learning (DL) has shown great advantages. Most convolutional neural network (CNN)-based methods focus on obtaining local features from multispectral (MS) and panchromatic (PAN) images but ignore nonlocal dependencies in the images. Transformer-based methods have therefore been introduced to obtain long-range information. However, the representational capability of features extracted by a CNN or a Transformer alone is weak. To solve this problem, a local and nonlocal feature interaction network (LNFIN), comprising Transformer and CNN branches, is proposed in this paper for pansharpening. A feature interaction module (FIM) fuses the two kinds of features and returns them to the two branches to enhance their representational capability. Specifically, the CNN branch, consisting of multiscale dense modules (MDMs), acquires local features of the image, and the Transformer branch, consisting of pansharpening Transformer modules (PTMs), acquires nonlocal features. In addition, inspired by the PTM, a shift pansharpening Transformer module (SPTM) is proposed to learn texture features and further enhance the spatial representation. Experiments on three datasets show that LNFIN outperforms state-of-the-art methods.
- Published
- 2022
49. An Evaluation of Pixel- and Object-Based Tree Species Classification in Mixed Deciduous Forests Using Pansharpened Very High Spatial Resolution Satellite Imagery
- Author
-
Martina Deur, Ivan Balenović, and Mateo Gašparović
- Subjects
Computer science ,Science ,Forest management ,object-based classification (OBIA) ,pansharpening ,Bayes' theorem ,WorldView-3 ,random forest ,pixel-based classification ,Satellite imagery ,Image resolution ,Carpinus betulus ,biology ,Pixel ,Pattern recognition ,General Earth and Planetary Sciences ,Artificial intelligence - Abstract
Gathering high-quality tree species information is the basis for making proper decisions in forest management. By applying new technologies and remote sensing methods, very high resolution (VHR) satellite imagery can give sufficient spatial detail to achieve accurate species-level classification. In this study, the influence of pansharpening WorldView-3 (WV-3) satellite imagery on the classification of three main tree species (Quercus robur L., Carpinus betulus L., and Alnus glutinosa (L.) Gaertn.) was evaluated. To increase tree species classification accuracy, three different pansharpening algorithms (Bayes, RCS, and LMVM) were applied, of which LMVM proved the most effective. Pixel- and object-based classifications were then applied to the three pansharpened images using a random forest (RF) algorithm. The results showed very high overall accuracy (OA) for the LMVM-pansharpened imagery: 92% and 96% for pixel- and object-based tree species classification, respectively. As expected, the object-based approach exceeded the pixel-based one (OA increased by 4%). The influence of fusion on the classification results was analyzed as well: the increased spatial resolution of the pansharpened images improved overall classification accuracy (OA increased by 7% for the pixel-based approach). Regardless of the classification approach, pansharpening is thus highly beneficial for classifying complex, natural, mixed deciduous forest areas.
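The overall accuracy (OA) figures used above to compare the pixel- and object-based classifications are simply the share of reference samples whose predicted species label matches. A minimal sketch, with made-up labels standing in for the three species:

```python
# Overall accuracy: fraction of reference samples classified correctly.

def overall_accuracy(reference, predicted):
    hits = sum(1 for r, p in zip(reference, predicted) if r == p)
    return hits / len(reference)

ref = ["oak", "oak", "hornbeam", "alder", "oak", "alder", "hornbeam", "oak"]
pix = ["oak", "alder", "hornbeam", "alder", "oak", "alder", "oak", "oak"]   # pixel-based
obj = ["oak", "oak", "hornbeam", "alder", "oak", "alder", "oak", "oak"]     # object-based

oa_pixel = overall_accuracy(ref, pix)    # 6 of 8 correct
oa_object = overall_accuracy(ref, obj)   # 7 of 8 correct
```

In the study the same comparison is done per pansharpening algorithm, which is how the 4% object-over-pixel and 7% pansharpening gains are quantified.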
- Published
- 2021
50. Super Resolution Infrared Thermal Imaging Using Pansharpening Algorithms: Quantitative Assessment and Application to UAV Thermal Imaging
- Author
-
Julian Aguirre de Mata, Juan F. Prieto, Javier Raimundo, Serafín López-Cuervo Medina, and Ministerio de Economía y Competitividad
- Subjects
Computer science ,multispectral ,Multispectral image ,thermal imaging ,infrared spectroscopy ,pansharpening ,super-resolution ,Biochemistry ,high resolution ,Analytical Chemistry ,remote sensing ,Thermal ,Quantitative assessment ,Electrical and Electronic Engineering ,Instrumentation ,satellites ,architectural heritage ,Drones ,resolution enhancement ,Superresolution ,Atomic and Molecular Physics, and Optics ,Panchromatic film ,infrared thermography ,thermal sensor ,infrared ,aerial imaging ,Algorithm - Abstract
The lack of high-resolution thermal images is a limiting factor in their fusion with data from higher-resolution sensors. In the field of remote sensing, different families of algorithms have been designed to fuse panchromatic images with multispectral images from satellite platforms, in a process known as pansharpening, and attempts have been made to transfer these pansharpening algorithms to thermal images from satellite sensors. Our work analyses the potential of these algorithms when applied to thermal images from unmanned aerial vehicles (UAVs). We present a quantitative comparison of these satellite-oriented pansharpening methods when they are used to fuse high-resolution images with thermal images obtained from UAVs, in order to choose the method that offers the best quantitative results. This analysis, which allows an objective selection of which method to use with this type of image, had not been done until now. The selected algorithm is used here to fuse images from thermal sensors on UAVs with images from other sensors for the documentation of heritage, but it has applications in many other fields.
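One widely used quantitative index in pansharpening assessment of this kind is the spectral angle mapper (SAM): the angle between the spectral vectors of the fused and reference images at each pixel, with 0 meaning no spectral distortion. The sketch below shows SAM on a single made-up pixel spectrum; it is offered as a representative index, not as the paper's specific evaluation protocol.

```python
# Spectral angle mapper (SAM) between two spectral vectors, in radians.
import math

def sam(v1, v2):
    """Angle between spectral vectors; 0 means identical spectral shape."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    # Clamp to [-1, 1] to guard acos against floating-point drift.
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

reference = [0.2, 0.4, 0.6]
fused_ok  = [0.4, 0.8, 1.2]    # same direction, different brightness
angle = sam(reference, fused_ok)   # near zero: no spectral distortion
```

Because SAM ignores vector length, a fusion that changes brightness but preserves spectral shape scores (near) zero, which makes it a natural complement to spatial-fidelity indexes when ranking pansharpening methods.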
- Published
- 2021
- Full Text
- View/download PDF