9 results for "optical satellite image"
Search Results
2. Imaging Parameters-Considered Slender Target Detection in Optical Satellite Images
- Author
- Zhaoyang Huang, Feng Wang, Hongjian You, and Yuxin Hu
- Subjects
- General Earth and Planetary Sciences, optical satellite image, imaging parameters, slender targets, object detection
- Abstract
Existing slender-target detection methods based on optical satellite images are strongly affected by the satellite and solar perspectives, and limited data sources make a fully data-driven approach difficult to implement. This work introduces the imaging parameters of optical satellite images, which greatly reduces the influence of the satellite and solar perspectives and lowers the amount of data required. We improve the oriented bounding box (OBB) detector based on Faster R-CNN (region-based convolutional neural network) and propose an imaging-parameters-considered detector (IPC-Det) that is better suited to this task. Specifically, in the first stage, the umbra and the shadow are each extracted with horizontal bounding boxes (HBBs), and the umbra is then matched to its shadow according to the imaging parameters. In the second stage, the paired umbra and shadow features are used for classification and regression, and the target is localized with an OBB. In experiments, introducing the imaging parameters improves detection accuracy by 3.9% (to 87.5%), showing that incorporating imaging parameters is a promising direction for slender target detection. (An illustrative sketch of the umbra-to-shadow matching idea follows this record.)
- Published
- 2022
- Full Text
- View/download PDF
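The umbra-shadow matching step described above lends itself to a small geometric illustration. The following sketch is not the authors' IPC-Det code: it only shows one plausible way to pair umbra and shadow boxes using a solar azimuth taken from the imaging parameters, and the function names, the azimuth-to-pixel convention, and the scoring heuristic are all assumptions.

```python
# Illustrative sketch (not the authors' IPC-Det implementation): pair umbra
# detections with shadow detections using the solar azimuth from the image's
# imaging parameters. Pure NumPy; all conventions here are assumptions.
import numpy as np

def box_center(box):
    # box = (x_min, y_min, x_max, y_max) in pixel coordinates
    return np.array([(box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0])

def match_umbra_to_shadow(umbra_boxes, shadow_boxes, sun_azimuth_deg, max_dist=200.0):
    """Greedily assign to each umbra box the shadow box that lies closest to the
    direction in which its shadow is expected to fall (away from the sun)."""
    az = np.deg2rad(sun_azimuth_deg)
    # Assumed convention: azimuth measured clockwise from north, image x right
    # and y down, so the expected shadow direction in pixels is:
    shadow_dir = np.array([-np.sin(az), np.cos(az)])
    pairs, used = [], set()
    for i, ub in enumerate(umbra_boxes):
        uc = box_center(ub)
        best_j, best_score = None, np.inf
        for j, sb in enumerate(shadow_boxes):
            if j in used:
                continue
            offset = box_center(sb) - uc
            dist = np.linalg.norm(offset)
            if dist == 0 or dist > max_dist:
                continue
            alignment = np.dot(offset / dist, shadow_dir)  # 1.0 = perfectly aligned
            score = dist * (2.0 - alignment)               # prefer close, aligned shadows
            if alignment > 0.5 and score < best_score:
                best_j, best_score = j, score
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs

if __name__ == "__main__":
    umbras = [(100, 100, 120, 180)]
    shadows = [(140, 100, 200, 180), (100, 300, 120, 380)]
    # Sun in the west (azimuth 270 deg) -> shadow falls to the east (+x).
    print(match_umbra_to_shadow(umbras, shadows, sun_azimuth_deg=270.0))  # [(0, 0)]
```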
3. Saliency Guided DNL-Yolo for Optical Remote Sensing Images for Off-Shore Ship Detection
- Author
- Jian Guo, Shuchen Wang, and Qizhi Xu
- Subjects
- Fluid Flow and Transfer Processes, Process Chemistry and Technology, optical satellite image, ship detection, convolutional neural networks, deep learning, General Engineering, Image Processing and Computer Vision, General Materials Science, Instrumentation, Computer Science Applications
- Abstract
The complexity of changeable marine backgrounds makes ship detection from optical satellite remote sensing images a challenging task, and the ubiquitous interference of cloud and fog causes missed detections and false alarms. An off-shore ship detection method combining scene classification and a saliency-tuned YOLO network is proposed to solve this problem. First, image blocks are classified into four categories by a density peak clustering (DPC) algorithm according to their grayscale histograms: cloudless areas, thin-cloud areas, scattered-cloud areas, and thick-cloud areas. Second, since ships can be regarded as salient objects against a marine background, the spectral residual saliency detection method is used to extract prominent targets from the different image blocks. Finally, a saliency-tuned YOLOv4 network is designed to quickly and accurately detect ships in different marine backgrounds. We validated the proposed method on more than 2000 optical remote sensing images from the GF-1 satellite. The experimental results demonstrate that the proposed method obtains better detection performance than other state-of-the-art methods. (A hedged sketch of the spectral residual saliency step appears after this record.)
- Published
- 2022
- Full Text
- View/download PDF
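The spectral residual saliency step referenced in the abstract is a well-known, self-contained algorithm (Hou and Zhang's spectral residual approach), so a NumPy-only sketch can illustrate it. This is a generic implementation of that method, not the authors' code; the averaging kernel size, the smoothing, and the toy example are assumptions.

```python
# Generic spectral-residual saliency sketch (not the paper's code).
import numpy as np

def spectral_residual_saliency(gray, avg_kernel=3):
    """Saliency map of a 2-D grayscale image block via the spectral residual."""
    f = np.fft.fft2(gray.astype(np.float64))
    log_amp = np.log(np.abs(f) + 1e-12)
    phase = np.angle(f)
    # Spectral residual = log amplitude minus its local average.
    pad = avg_kernel // 2
    padded = np.pad(log_amp, pad, mode="edge")
    local_avg = np.zeros_like(log_amp)
    for dy in range(avg_kernel):
        for dx in range(avg_kernel):
            local_avg += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    local_avg /= avg_kernel ** 2
    residual = log_amp - local_avg
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    # Light smoothing by repeated 4-neighbour averaging (stand-in for a Gaussian).
    for _ in range(3):
        saliency = (np.roll(saliency, 1, 0) + np.roll(saliency, -1, 0) +
                    np.roll(saliency, 1, 1) + np.roll(saliency, -1, 1) +
                    saliency) / 5.0
    return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-12)

if __name__ == "__main__":
    block = np.random.rand(128, 128)
    block[40:60, 50:90] += 2.0           # a bright "ship-like" blob
    sal = spectral_residual_saliency(block)
    print("blob is salient:", sal[40:60, 50:90].mean() > sal.mean())
```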
4. Real-Time Moving Ship Detection from Low-Resolution Large-Scale Remote Sensing Image Sequence
- Author
- Jiyang Yu, Dan Huang, Xiaolong Shi, Wenjie Li, and Xianjie Wang
- Subjects
- Fluid Flow and Transfer Processes, ship detection, Process Chemistry and Technology, optical satellite image, convolutional neural networks, General Engineering, deep learning, General Materials Science, Instrumentation, Computer Science Applications
- Abstract
Optical remote sensing ship detection has become an essential means of ocean supervision, coastal defense, and frontier defense, and accurate, effective, fast, real-time processing of the remote sensing data is the critical technology in this field. This paper proposes a real-time detection algorithm for moving targets in low-resolution, wide-area remote sensing image sequences that consists of four steps: pre-screening, simplified HOG (S-HOG) feature identification, sequence correlation identification, and facilitated YOLO identification. It can effectively detect and track targets in low-resolution sequence data. First, iterative morphological processing improves the contrast of low-resolution ship profile edges against the sea-surface background, and the adaptively segmented target area is used to eliminate false alarms, so that invalid background regions in the large-scale data are quickly discarded. Then, support vector machine (SVM) classification of S-HOG features is carried out for the suspected targets, and interference such as islands, reefs, broken clouds, and waves is eliminated according to the shape characteristics of ship targets. Multi-frame data association, which searches for corresponding target information between adjacent frames, is adopted to eliminate static targets and broken clouds with similar contours. Finally, the remaining candidates are further classified by the clipped YOLO network to complete false-alarm elimination. Compared with the traditional YOLO Tiny V2/V3 networks, this method achieves higher computational speed and better detection performance: the F1 score of the detection results increased by 3%, and the computation time was reduced by 66%. (A sketch of the HOG-plus-SVM screening stage follows this record.)
- Published
- 2023
- Full Text
- View/download PDF
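As a hedged illustration of the S-HOG feature / SVM screening stage described in the abstract (not the authors' implementation), the sketch below classifies candidate chips into ship versus clutter using scikit-image's HOG descriptor and a linear SVM. The chip size, HOG parameters, and toy training data are assumptions.

```python
# Hypothetical HOG + SVM candidate screening sketch (not the authors' S-HOG code).
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

CHIP_SIZE = (32, 32)  # assumed chip size for low-resolution candidates

def chip_features(chip):
    """Simplified HOG descriptor for one candidate image chip."""
    return hog(chip, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def train_screen(ship_chips, clutter_chips):
    """Fit a linear SVM that separates ship chips from clutter chips."""
    X = np.array([chip_features(c) for c in ship_chips + clutter_chips])
    y = np.array([1] * len(ship_chips) + [0] * len(clutter_chips))
    return LinearSVC(C=1.0).fit(X, y)

def screen_candidates(clf, chips):
    """Keep only chips the SVM labels as ships; islands, reefs and cloud
    fragments are expected to fall into the clutter class."""
    X = np.array([chip_features(c) for c in chips])
    return [c for c, keep in zip(chips, clf.predict(X)) if keep == 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: "ships" are noisy chips with a bright elongated blob in the middle.
    ships = [rng.random(CHIP_SIZE) + np.pad(np.ones((8, 24)), ((12, 12), (4, 4)))
             for _ in range(20)]
    clutter = [rng.random(CHIP_SIZE) for _ in range(20)]
    clf = train_screen(ships, clutter)
    print(len(screen_candidates(clf, ships[:5])), "of 5 chips kept")
```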
5. STC-Det: A Slender Target Detector Combining Shadow and Target Information in Optical Satellite Images
- Author
- Feng Wang, Hongjian You, Zhaoyang Huang, and Yuxin Hu
- Subjects
- Computer science, Computer vision, Artificial intelligence, Object detection, Detector, Shadow, Satellite, optical satellite image, slender targets, General Earth and Planetary Sciences
- Abstract
Object detection has made great progress; however, due to the unique imaging geometry of optical satellite remote sensing, the detection of slender targets is still unsatisfactory. Specifically, the perspective of optical satellites is small, so the features of slender targets are severely lost during imaging, leaving insufficient information for the detection task; at the same time, the appearance of slender targets in the image is strongly affected by the satellite perspective, which tends to limit the generalization ability of conventional detection models. We make two improvements in response to these issues. First, we introduce the shadow as auxiliary information to complement the trunk features of the target that are lost in imaging. Second, to reduce the impact of the satellite perspective on imaging, we use the characteristic that shadow information is not affected by the satellite perspective to design STC-Det. STC-Det treats the shadow and the target as two different classes and uses the shadow information to assist detection, reducing the impact of the satellite perspective on detection. To further improve the performance of STC-Det, we propose an automatic matching method (AMM) for shadows and targets and a feature fusion method (FFM). Finally, we propose a new way to compute detector heatmaps, which verifies the effectiveness of the proposed network visually. Experiments show that when the satellite perspective is variable, the precision of STC-Det increases by 1.7%, and when the satellite perspective is small, it increases by 5.2%. (A speculative sketch of a shadow-target feature fusion module follows this record.)
- Published
- 2021
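The feature fusion method (FFM) is not specified in the abstract, so the block below is only a speculative PyTorch sketch of what fusing RoI features of a matched target and its shadow could look like; the layer shapes and the concatenate-then-convolve design are assumptions, not the authors' architecture.

```python
# Speculative shadow/target feature fusion sketch (not the paper's FFM).
import torch
import torch.nn as nn

class ShadowTargetFusion(nn.Module):
    def __init__(self, channels=256):
        super().__init__()
        # 1x1 conv mixes the concatenated target/shadow channels,
        # 3x3 conv refines the fused map spatially.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, target_feat, shadow_feat):
        # target_feat, shadow_feat: (N, C, H, W) RoI-aligned features of
        # matched target/shadow pairs.
        return self.fuse(torch.cat([target_feat, shadow_feat], dim=1))

if __name__ == "__main__":
    ffm = ShadowTargetFusion(channels=256)
    t = torch.randn(4, 256, 7, 7)   # pooled target features
    s = torch.randn(4, 256, 7, 7)   # pooled shadow features
    print(ffm(t, s).shape)          # torch.Size([4, 256, 7, 7])
```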
6. VHRShips: An Extensive Benchmark Dataset for Scalable Deep Learning-Based Ship Detection Applications
- Author
- Serdar Kızılkaya, Ugur Alganci, and Elif Sertel
- Subjects
- Geography, Planning and Development, Earth and Planetary Sciences (miscellaneous), Computers in Earth Sciences, deep learning, optical satellite image, ship classification, end-to-end approach, dataset
- Abstract
The classification of maritime boats and ship targets using optical satellite imagery is a challenging subject. This research introduces a unique and rich ship dataset named Very High-Resolution Ships (VHRShips), built from Google Earth images, which includes diverse ship types, different ship sizes, several inshore locations, and different data acquisition conditions to improve the scalability of ship detection and mapping applications. In addition, we propose a deep learning-based multi-stage approach for ship type classification from very high resolution satellite images to evaluate the performance of the VHRShips dataset. Our "Hierarchical Design (HieD)" approach is an end-to-end structure that allows the Detection, Localization, Recognition, and Identification (DLRI) stages to be optimized independently. We focused on sixteen parent ship classes for the DLR stages and specifically considered eight child classes of the navy parent class at the identification stage. We used the Xception network in the DRI stages and implemented YOLOv4 for the localization stage. Individual optimization of each stage resulted in F1 scores of 99.17%, 94.20%, 84.08%, and 82.13% for detection, recognition, localization, and identification, respectively, while the end-to-end implementation of our approach resulted in F1 scores of 99.17%, 93.43%, 74.00%, and 57.05% in the same order. In comparison, end-to-end YOLOv4 yielded F1 scores of 99.17%, 86.59%, 68.87%, and 56.28%, respectively. HieD therefore outperformed YOLOv4 for the localization, recognition, and identification stages, indicating the usability of the VHRShips dataset in different detection and classification models. The proposed method and dataset can also serve as a benchmark for further studies applying deep learning to large-scale geodata to boost GeoAI applications in the maritime domain. (An illustrative skeleton of such a hierarchical DLRI pipeline follows this record.)
- Published
- 2022
- Full Text
- View/download PDF
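To make the stage-wise DLRI idea concrete, here is an illustrative Python skeleton of a hierarchical pipeline in the spirit of HieD; the class names, the navy-only identification gate, and the toy stand-in models are assumptions, while the real stages would be trained Xception and YOLOv4 models as described above.

```python
# Illustrative skeleton (not the authors' code) of a hierarchical DLRI pipeline.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ShipCandidate:
    chip: object                 # image chip of the candidate region
    box: tuple                   # (x_min, y_min, x_max, y_max)
    parent_class: str = ""       # one of the parent classes (e.g. "navy")
    child_class: str = ""        # finer class, filled in for navy ships only

@dataclass
class HieDPipeline:
    detect: Callable      # scene level: does this tile contain ships at all?
    localize: Callable    # tile -> list of (box, chip), e.g. a YOLO-style detector
    recognize: Callable   # chip -> parent class, e.g. an Xception classifier
    identify: Callable    # chip -> child class, applied to navy ships only

    def run(self, tile) -> List[ShipCandidate]:
        if not self.detect(tile):
            return []
        results = []
        for box, chip in self.localize(tile):
            cand = ShipCandidate(chip=chip, box=box)
            cand.parent_class = self.recognize(chip)
            if cand.parent_class == "navy":
                cand.child_class = self.identify(chip)
            results.append(cand)
        return results

if __name__ == "__main__":
    # Toy stand-ins so the skeleton runs end to end.
    pipe = HieDPipeline(
        detect=lambda tile: True,
        localize=lambda tile: [((10, 10, 50, 30), "chip-A")],
        recognize=lambda chip: "navy",
        identify=lambda chip: "frigate",
    )
    print(pipe.run("fake-tile"))
```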
7. Time Series GIS Map Dataset of Demolished Buildings in Mashiki Town after the 2016 Kumamoto, Japan Earthquake
- Author
- Masashi Matsuoka and Yuzuru Kushiyama
- Subjects
- Geographic information system (GIS), Geography, Cartography, General Earth and Planetary Sciences, meteorology & atmospheric sciences, geological & geomatics engineering, optical satellite image, SPOT, Pleiades, disaster waste, post disaster, recovery, mapping, labeling, binary, visual interpretation, field survey, estimated weight, training set, test data
- Abstract
After a large-scale disaster, many damaged buildings are demolished and treated as disaster waste. Although the weight of disaster waste was estimated two months after the 2016 earthquake in Kumamoto, Japan, the estimate differed significantly from the result obtained when the disaster waste disposal was completed in March 2018. The amount of disaster waste generated can be estimated by multiplying the total number of severely and partially damaged buildings by a coefficient of generated weight per building. We suppose that the amount of disaster waste is also affected by the conditions of the demolished buildings, namely the areas and typologies of the building structures, but this has not yet been clarified. Therefore, in this study, we aimed to use geographic information system (GIS) map data to create a time series GIS map dataset that labels demolished and remaining buildings in Mashiki town over the two-year period prior to the completion of the disaster waste disposal. We used OpenStreetMap (OSM) data as the base data and time series SPOT images observed in the two years following the Kumamoto earthquake to label all demolished and remaining buildings in the GIS map dataset. To efficiently label the approximately 16,000 buildings in Mashiki town, we calculated an indicator of how likely each building is to be classified as remaining or demolished from the change of brightness in the SPOT images. Using this indicator as a reference, we classified 5701 of 16,106 buildings as demolished, as of March 2018, by visual interpretation of the SPOT and Pleiades images, and we verified that this number was almost the same as that reported by the Mashiki municipality. We also assessed the accuracy of the proposed method: the F-measure was higher than 0.9 on the training dataset of 55 demolished and 55 remaining buildings, which was verified by a field survey and visual interpretation. When the method was applied to all labels in the OSM dataset, however, the F-measure was 0.579; on a balanced test set of another 100 demolished and 100 remaining buildings, separate from the training data, the F-measure computed from the SPOT image of 25 March 2018 was 0.790. The proposed method therefore performed better for balanced than for imbalanced classification, and we examined examples of the image characteristics behind correct and incorrect estimates obtained by thresholding the indicator. (A hedged sketch of a brightness-change indicator of this kind follows this record.)
- Published
- 2019
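The brightness-change indicator is described only qualitatively in the abstract, so the sketch below is a hedged illustration of one way such an indicator could be computed per building footprint and thresholded; the formula, the threshold, and the assumption that demolished lots appear brighter are all assumptions, not the authors' definition.

```python
# Hypothetical per-building brightness-change indicator (not the paper's formula).
import numpy as np

def demolition_indicator(img_before, img_after, footprint_mask):
    """Relative brightness change inside one building footprint."""
    before = img_before[footprint_mask].mean()
    after = img_after[footprint_mask].mean()
    return (after - before) / (before + 1e-6)

def label_buildings(img_before, img_after, footprints, threshold=0.3):
    """Return 'demolished' / 'remaining' label candidates for visual checking."""
    labels = {}
    for bid, mask in footprints.items():
        ind = demolition_indicator(img_before, img_after, mask)
        labels[bid] = "demolished" if ind > threshold else "remaining"
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    before = rng.normal(100, 5, (64, 64))
    after = before.copy()
    mask_a = np.zeros((64, 64), bool); mask_a[10:20, 10:20] = True
    mask_b = np.zeros((64, 64), bool); mask_b[40:50, 40:50] = True
    after[mask_a] += 60        # building A replaced by brighter bare ground
    print(label_buildings(before, after, {"A": mask_a, "B": mask_b}))
```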
8. WaterNet: A Convolutional Neural Network for Chlorophyll-a Concentration Retrieval
- Author
- Chao-Hung Lin, Ariel C. Blanco, Muhammad Aldila Syariz, Lalu Muhamad Jaelani, and Manh Van Nguyen
- Subjects
- Computer science, Artificial intelligence, Pattern recognition, Artificial neural network, Convolutional neural network, Feature extraction, Spatial analysis, Radiative transfer, Global optimum, chlorophyll-a concentration retrieval, optical satellite image, meteorology & atmospheric sciences, geological & geomatics engineering, General Earth and Planetary Sciences
- Abstract
The retrieval of chlorophyll-a (Chl-a) concentrations relies on empirical or analytical analyses, which generally suffer from the diversity of inland waters in the statistical case and from the complexity of the radiative transfer equations in the analytical case. Previous studies proposed artificial neural networks (ANNs) to alleviate these problems, but ANNs do not address the shortage of in situ samples during model training, and they do not fully utilize the spatial and spectral information of remote sensing images. In this study, a two-stage training scheme is introduced to address the problem of sample insufficiency: the neural network is pretrained with samples derived from an existing Chl-a concentration model in the first stage, and the pretrained model is refined with in situ samples in the second stage. A novel convolutional neural network for Chl-a concentration retrieval, called WaterNet, is proposed that utilizes both the spectral and the spatial information of remote sensing images. In addition, an end-to-end structure that integrates feature extraction, band expansion, and Chl-a estimation into one network leads to efficient and effective Chl-a concentration retrieval. In experiments, Sentinel-3 images acquired on the same days as in situ measurements over Laguna Lake in the Philippines were used to train and evaluate WaterNet. The quantitative analyses show that two-stage training is more likely than one-stage training to reach the global optimum of the optimization, and that WaterNet with two-stage training outperforms related ANN-based and band-combination-based Chl-a concentration models in estimation accuracy. (A sketch of the two-stage training idea follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
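The two-stage training scheme can be illustrated independently of the WaterNet architecture. The PyTorch sketch below pretrains a tiny placeholder network on pseudo-labels from a stand-in band-ratio Chl-a model and then fine-tunes it on a small "in situ" set; the network, the band-ratio formula, and all hyperparameters are assumptions, not the authors' model.

```python
# Two-stage training sketch (placeholder network, not WaterNet).
import torch
import torch.nn as nn

def band_ratio_chla(reflectance):
    # Placeholder "existing" empirical model: a simple ratio of two bands.
    return 10.0 * reflectance[:, 1] / (reflectance[:, 0] + 1e-6)

net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()

def train(x, y, epochs, lr):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(net(x).squeeze(-1), y)
        loss.backward()
        opt.step()
    return loss.item()

# Stage 1: pretrain on many pixels pseudo-labelled by the existing model.
x_pre = torch.rand(2000, 8)
train(x_pre, band_ratio_chla(x_pre), epochs=200, lr=1e-3)

# Stage 2: refine on a small set of in situ matchup samples.
x_insitu = torch.rand(40, 8)
y_insitu = band_ratio_chla(x_insitu) + 0.5 * torch.randn(40)
print("fine-tune loss:", train(x_insitu, y_insitu, epochs=100, lr=1e-4))
```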
9. RFM Usage for Georeferencing of High Resolution Satellite Images
- Author
- Hüseyin Topan
- Subjects
- IKONOS, georeferencing accuracy, RFM, optical satellite image, Zonguldak, georeferencing
- Abstract
Georeferencing is mandatory in many applications where satellite images are used. It is based on a transformation between the image and object coordinate systems, for which parametric, semi-parametric, and non-parametric mathematical models can be used. This paper considers a semi-parametric model, the sensor-based RFM (Rational Function Model) suggested by the OGC (Open Geospatial Consortium). The theoretical background is presented first: the general equation, the estimation of the coefficients, the distortions involved and their removal, the simplification of RFMs thanks to the special characteristics of the coefficients, and the various adjustment models. Then, the georeferencing accuracies of a mono IKONOS panchromatic image covering the Zonguldak test field, which has an undulating and mountainous topography, are presented using 22 GCPs via a computation package called GeoFigcon developed by the author in the MATLAB environment. When bias compensation is applied, the best accuracies are obtained with a 1st-degree RFM. (A minimal sketch of evaluating an RFM follows this record.)
- Published
- 2013
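As a minimal illustration of what evaluating a sensor-based RFM involves, the sketch below maps ground coordinates to image line/sample as ratios of cubic polynomials in normalized coordinates. The 20-term ordering follows one common RPC convention and is an assumption (vendor metadata defines the actual order), and the zero-order bias compensation is only indicated by a comment.

```python
# Minimal RFM (RPC) evaluation sketch; coefficient ordering is an assumption.
import numpy as np

def poly20(P, L, H):
    """Cubic polynomial basis in normalized coordinates (lat P, lon L, height H)."""
    return np.array([
        1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
        P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3, P*H*H, L*L*H, P*P*H, H**3,
    ])

def rfm_ground_to_image(lat, lon, h, rpc):
    """Map ground coordinates to (line, sample); rpc holds offsets, scales and
    the four 20-coefficient arrays of the rational polynomials."""
    P = (lat - rpc["lat_off"]) / rpc["lat_scale"]
    L = (lon - rpc["lon_off"]) / rpc["lon_scale"]
    H = (h - rpc["h_off"]) / rpc["h_scale"]
    basis = poly20(P, L, H)
    line_n = basis @ rpc["line_num"] / (basis @ rpc["line_den"])
    samp_n = basis @ rpc["samp_num"] / (basis @ rpc["samp_den"])
    line = line_n * rpc["line_scale"] + rpc["line_off"]
    samp = samp_n * rpc["samp_scale"] + rpc["samp_off"]
    # A zero-order bias compensation (a shift estimated from GCPs) would be
    # added to (line, samp) here.
    return line, samp

if __name__ == "__main__":
    # Dummy coefficients: line and sample depend on latitude only.
    den = np.zeros(20); den[0] = 1.0
    num = np.zeros(20); num[2] = 1.0
    rpc = dict(lat_off=41.4, lat_scale=0.1, lon_off=31.8, lon_scale=0.1,
               h_off=300.0, h_scale=500.0, line_off=5000, line_scale=5000,
               samp_off=5000, samp_scale=5000,
               line_num=num, line_den=den, samp_num=num, samp_den=den)
    print(rfm_ground_to_image(41.45, 31.85, 350.0, rpc))  # (7500.0, 7500.0)
```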