107 results for "Jiann Yeou Rau"
Search Results
2. Integrating UAV and Ground Panoramic Images for Point Cloud Analysis of Damaged Building.
- Author
-
Jyun-Ping Jhan, Norman Kerle, and Jiann-Yeou Rau
- Published
- 2022
- Full Text
- View/download PDF
3. Multitemporal UAV Photogrammetry For Sandbank Morphological Change Analysis: Evaluations of Camera Calibration Methods, Co-Registration Strategies, and the Reconstructed DSMs.
- Author
-
Ruli Andaru, Jiann-Yeou Rau, Laurence Zsu-Hsin Chuang, and Chia-Hung Jen
- Published
- 2022
- Full Text
- View/download PDF
4. Determination of potential secondary lahar hazard areas based on pre- and post-eruption UAV DEMs: Automatic identification of initial lahar starting points and supplied lahar volume
- Author
-
Ruli Andaru, Jiann-Yeou Rau, and Ardy Setya Prayoga
- Subjects
Secondary lahar, Lahar starting point, Supplied lahar volume, UAV DEM, Physical geography, GB3-5030, Environmental sciences, GE1-350 - Abstract
Secondary lahars, generated after volcanic eruptions, may pose significant threats to life and infrastructure. Secondary lahars typically develop from ash deposits and other volcanic debris that remobilize downstream via intense rainfall. The lahar inundation zone after eruptions must be predicted to minimize the impact. This prediction can be modeled based on digital elevation models (DEMs) and two parameters associated with lahar simulations: the lahar starting point (LSP), which indicates the potential locations at which a lahar flow may initiate, and supplied lahar volume (SLV), which is the lahar volume corresponding to each LSP. These parameters are typically determined by assumptions based on past lahar events, which may be unrealistic and often misinterpreted in the inundation prediction. To address this problem, this paper proposes an automated method to estimate the LSP and SLV based on pre- and post-eruption DEMs generated from unmanned aerial vehicle (UAV) images and simulate the inundation zone using the LAHARZ model. The study site is located in the southeast region of Mount Agung (Indonesia), and the objective is to mitigate the potential secondary lahar hazard after the 2017–2019 eruption crisis. Results show that the parameter estimations using the high-resolution UAV DEM and LAHARZ produce a realistic lahar simulation, with a satisfactory similarity of 82%, as verified against the lahar footprint. Moreover, we compare the results with those obtained using TerraSAR-X DEM and demonstrate the importance of using a detailed UAV DEM to avoid underestimating the lahar runout and ensure that the simulated inundation zones mimic real lahars.
- Published
- 2022
- Full Text
- View/download PDF
5. A Generalized Tool for Accurate and Efficient Image Registration of UAV Multi-lens Multispectral Cameras by N-SURF Matching.
- Author
-
Jyun-Ping Jhan and Jiann-Yeou Rau
- Published
- 2021
- Full Text
- View/download PDF
6. Use of principal components of UAV-acquired narrow-band multispectral imagery to map the diverse low stature vegetation fAPAR
- Author
-
Cho-ying Huang, Hsin-Lin Wei, Jiann-Yeou Rau, and Jyun-Ping Jhan
- Subjects
chemometrics, leaf area index, minimca, precision agriculture, productivity, Mathematical geography. Cartography, GA1-1776, Environmental sciences, GE1-350 - Abstract
The fraction of absorbed photosynthetically active radiation (fAPAR) is an important plant physiological index that is used to assess the ability of vegetation to absorb PAR, which is utilized to sequester carbon from the atmosphere. This index is also important for monitoring plant health and productivity; it has been widely used to monitor low stature crops and is a crucial metric for food security assessment. The fAPAR has commonly been correlated with greenness indices derived from spaceborne optical imagery, but the relatively coarse spatial or temporal resolution may prohibit its application on complex land surfaces. In addition, the relationships between fAPAR and remotely sensed greenness data may be influenced by the heterogeneity of canopies. Multispectral and hyperspectral unmanned aerial vehicle (UAV) imaging systems, conversely, can provide several spectral bands at sub-meter resolutions, permitting precise estimation of fAPAR using chemometrics. However, the data pre-processing procedures are cumbersome, which makes large-scale mapping challenging. In this study, we applied a set of well-verified image processing protocols and a chemometric model to a lightweight, frame-based and narrow-band (10 nm) UAV imaging system to estimate the fAPAR over a relatively large cultivated land area with a variety of low stature vegetation: tropical crops along with native and non-native grasses. A principal component regression was applied to 12 bands of spectral reflectance data to minimize the collinearity issue and compress the data variation. Stepwise regression was employed to reduce the data dimensionality, and the first, third and fifth components were selected to estimate the fAPAR. Our results indicate that 77% of the fAPAR variation was explained by the model.
All bands that are sensitive to foliar pigment concentrations, canopy structure and/or leaf water content may contribute to the estimation, especially those located close to (720 nm) or within (750 nm and 780 nm) the near-infrared spectral region. This study demonstrates that this narrow-band frame-based UAV system would be useful for vegetation monitoring. With proper pre-flight planning and hardware improvement, the mapping of a narrow-band multispectral UAV system could be comparable to that of a manned aircraft system.
- Published
- 2019
- Full Text
- View/download PDF
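The principal component regression pipeline summarized in entry 6 can be sketched in a few lines of NumPy. This is a minimal illustration rather than the authors' code: the reflectance matrix and fAPAR values below are synthetic stand-ins, and for brevity the first three components are used instead of the stepwise-selected first, third, and fifth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for 12-band UAV reflectance (200 plots x 12 bands);
# real data would come from the co-registered multispectral orthoimages.
X = rng.random((200, 12))
fapar = 0.4 * X[:, 7] + 0.3 * X[:, 9] + 0.05 * rng.standard_normal(200)

# Principal components: center the data, project onto right singular vectors.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                      # PC scores, one column per component

# The paper keeps a subset of components chosen by stepwise regression;
# here we simply take the first three to illustrate the regression step.
sel = scores[:, :3]
A = np.column_stack([np.ones(len(sel)), sel])
coef, *_ = np.linalg.lstsq(A, fapar, rcond=None)

pred = A @ coef
r2 = 1 - np.sum((fapar - pred) ** 2) / np.sum((fapar - fapar.mean()) ** 2)
```

With real data, `X` would hold the per-plot band reflectances and the stepwise step would decide which PC scores enter the regression.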
7. Development of a Large-Format UAS Imaging System With the Construction of a One Sensor Geometry From a Multicamera Array.
- Author
-
Jiann-Yeou Rau, Jyun-Ping Jhan, and Yi-Tang Li
- Published
- 2016
- Full Text
- View/download PDF
8. Underwater 3D Rigid Object Tracking and 6-DOF Estimation: A Case Study of Giant Steel Pipe Scale Model Underwater Installation.
- Author
-
Jyun-Ping Jhan, Jiann-Yeou Rau, and Chih-Ming Chou
- Published
- 2020
- Full Text
- View/download PDF
9. Analysis of Oblique Aerial Images for Land Cover and Point Cloud Classification in an Urban Environment.
- Author
-
Jiann-Yeou Rau, Jyun-Ping Jhan, and Ya-Ching Hsu
- Published
- 2015
- Full Text
- View/download PDF
10. Semiautomatic Object-Oriented Landslide Recognition Scheme From Multisensor Optical Imagery and DEM.
- Author
-
Jiann-Yeou Rau, Jyun-Ping Jhan, and Ruey-Juin Rau
- Published
- 2014
- Full Text
- View/download PDF
11. Reconstruction of Building Models with Curvilinear Boundaries from Laser Scanner and Aerial Imagery.
- Author
-
Liang-Chien Chen, Tee-Ann Teo, Chi-Heng Hsieh, and Jiann-Yeou Rau
- Published
- 2006
- Full Text
- View/download PDF
12. LOD Generation for 3D Polyhedral Building Model.
- Author
-
Jiann-Yeou Rau, Liang-Chien Chen, Fuan Tsai, Kuo-Hsin Hsiao, and Wei-Chen Hsu
- Published
- 2006
- Full Text
- View/download PDF
13. Integration of GPS, GIS and Photogrammetry for Texture Mapping in Photo-Realistic City Modeling.
- Author
-
Jiann-Yeou Rau, Tee-Ann Teo, Liang-Chien Chen, Fuan Tsai, Kuo-Hsin Hsiao, and Wei-Chen Hsu
- Published
- 2006
- Full Text
- View/download PDF
14. Reconstruction of Complex Buildings using LIDAR and 2D Maps.
- Author
-
Tee-Ann Teo, Jiann-Yeou Rau, Liang-Chien Chen, Jin-King Liu, and Wei-Chen Hsu
- Published
- 2006
- Full Text
- View/download PDF
15. Automatic Generation of Pseudo Continuous LoDs for 3D Polyhedral Building Model.
- Author
-
Jiann-Yeou Rau, Liang-Chien Chen, Fuan Tsai, Kuo-Hsin Hsiao, and Wei-Chen Hsu
- Published
- 2006
- Full Text
- View/download PDF
16. Disaster detection and damage estimation using satellite imagery and land-use information.
- Author
-
Jiann-Yeou Rau, Liang-Chien Chen, Cindy Tseng, Dong-Hsiung Wu, and Min-Huo Xie
- Published
- 2005
- Full Text
- View/download PDF
17. Building reconstruction from LIDAR data and aerial imagery.
- Author
-
Liang-Chien Chen, Tee-Ann Teo, Jiann-Yeou Rau, Jin-King Liu, and Wei-Chen Hsu
- Published
- 2005
- Full Text
- View/download PDF
18. A cost-effective strategy for multi-scale photo-realistic building modeling and web-based 3-D GIS applications in real estate.
- Author
-
Jiann-Yeou Rau and Chen-Kuang Cheng
- Published
- 2013
- Full Text
- View/download PDF
19. A Generalized Tool for Accurate and Efficient Image Registration of UAV Multi-lens Multispectral Cameras by N-SURF Matching
- Author
-
J. P. Jhan and Jiann Yeou Rau
- Subjects
Atmospheric Science, Matching (statistics), Image matching, multispectral (MS) image, Pixel, QC801-809, Computer science, Distortion (optics), Geophysics. Cosmic physics, Visual comparison, Feature extraction, Image registration, Ocean engineering, Feature (computer vision), Computer vision, Artificial intelligence, Computers in Earth Sciences, multispectral camera (MSC), TC1501-1800 - Abstract
The original multispectral (MS) images obtained from multi-lens multispectral cameras (MSCs) have significant misregistration errors, which require image registration for precise spectral measurement. However, because of the nonlinear intensity differences among MS images, it is difficult for image matching to find sufficient correct matches (CMs) for registration, which results in complex coarse-to-fine solutions. Based on a modification of speeded-up robust features (SURF), we propose a normalized SURF (N-SURF) that can significantly increase the number of CMs among different pairs of MS images and make one-step image registration possible. In this study, we first introduce N-SURF and adopt MS datasets acquired from three representative MSCs (MCA-12, Altum, and Sequoia) to evaluate its matching ability. We also utilize three image transformation models, namely the affine transform (AT), projective transform (PT), and an extended projective transform (EPT), to correct the misregistration errors of MSCs and evaluate their co-registration correctness. The results show that N-SURF can obtain 6–20 times more CMs than SURF and can successfully match all pairs of MS images, whereas SURF fails in cases of significant spectral differences. Moreover, visual comparison, accuracy assessment, and residual analysis show that EPT corrects the viewpoint and lens distortion differences of MSCs more accurately than AT and PT, achieving a co-registration accuracy of 0.2–0.4 pixels. Subsequently, using the successful N-SURF matching and the EPT model, we developed an automatic MS image registration tool that is suitable for various multi-lens MSCs.
- Published
- 2021
- Full Text
- View/download PDF
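The registration step in entry 19 fits a transformation model to matched points. N-SURF itself is the authors' contribution and is not reproduced here; the sketch below only illustrates the standard projective transform (PT) model, estimated from hypothetical band-to-band correspondences by the direct linear transform.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 projective transform by the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)            # null-space vector, up to scale
    return H / H[2, 2]

def apply_homography(H, pts):
    pts_h = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return pts_h[:, :2] / pts_h[:, 2:3]

# Hypothetical matched keypoints between two spectral bands (pixel coords).
src = np.array([[0, 0], [100, 0], [100, 80], [0, 80], [50, 40]], float)
H_true = np.array([[1.01, 0.02, 3.0], [-0.01, 0.99, -2.0], [1e-5, 0.0, 1.0]])
dst = apply_homography(H_true, src)

H = fit_homography(src, dst)
resid = np.abs(apply_homography(H, src) - dst).max()
```

The paper's extended projective transform (EPT) adds further correction terms on top of this basic model.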
20. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration.
- Author
-
Jiann-Yeou Rau and Po-Chia Yeh
- Published
- 2012
- Full Text
- View/download PDF
21. Direct Sensor Orientation of a Land-Based Mobile Mapping System.
- Author
-
Jiann-Yeou Rau, Ayman F. Habib, Ana Paula Kersting, Kai-Wei Chiang, Ki-In Bang, Yi-Hsing Tseng, and Yu-Hua Li
- Published
- 2011
- Full Text
- View/download PDF
22. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration
- Author
-
Po-Chia Yeh and Jiann-Yeou Rau
- Subjects
image-based 3D modeling, multi-image matching, multi-camera framework, Chemical technology, TP1-1185 - Abstract
The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline that takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, a calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken with the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in museums.
- Published
- 2012
- Full Text
- View/download PDF
23. Direct Sensor Orientation of a Land-Based Mobile Mapping System
- Author
-
Yu-Hua Li, Yi-Hsing Tseng, Ki-In Bang, Kai-Wei Chiang, Ana P. Kersting, Ayman F. Habib, and Jiann-Yeou Rau
- Subjects
Mobile Mapping Systems, direct sensor orientation, camera calibration, direct georeferencing, mounting parameters, Chemical technology, TP1-1185 - Abstract
A land-based mobile mapping system (MMS) is flexible and useful for the acquisition of road environment geospatial information. It integrates a set of imaging sensors and a position and orientation system (POS). The positioning quality of such systems is highly dependent on the accuracy of the utilized POS. This dependence is the major drawback, owing to the elevated cost associated with high-end GPS/INS units, particularly the inertial system. The potential accuracy of direct sensor orientation depends on the architecture and quality of the GPS/INS integration process as well as the validity of the system calibration (i.e., calibration of the individual sensors as well as the system mounting parameters). In this paper, a novel single-step procedure using integrated sensor orientation with a relative orientation constraint for the estimation of the mounting parameters is introduced. A comparative analysis between the proposed single-step and the traditional two-step procedure is carried out. Moreover, the mounting parameters estimated using the different methods are used in a direct georeferencing procedure to evaluate their performance and the feasibility of the implemented system. Experimental results show that the proposed single-step system calibration method can achieve high 3D positioning accuracy.
- Published
- 2011
- Full Text
- View/download PDF
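The mounting parameters estimated in entry 23 (lever arm and boresight) enter direct georeferencing as a rigid-body composition of the GNSS/INS pose with the camera-to-body offsets. A minimal sketch with yaw-only rotations and hypothetical numbers:

```python
import numpy as np

def rot_z(yaw):
    """Rotation about the z-axis (yaw only, for brevity)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical GNSS/INS pose of the vehicle body frame in the mapping frame.
r_body = np.array([100.0, 200.0, 50.0])   # body origin (m)
R_body = rot_z(np.deg2rad(30))            # body-to-mapping rotation

# Mounting parameters from system calibration (illustrative values):
lever_arm = np.array([0.5, 0.0, 1.2])     # camera origin in the body frame
R_bore = rot_z(np.deg2rad(1.0))           # boresight: camera-to-body rotation

# Direct georeferencing of the camera: compose pose with mounting parameters.
r_cam = r_body + R_body @ lever_arm       # camera position, mapping frame
R_cam = R_body @ R_bore                   # camera attitude, mapping frame
```

Every subsequent image can then be georeferenced from the POS trajectory alone, which is why the quality of the estimated mounting parameters drives the final 3D positioning accuracy.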
24. Dynamics Monitoring and Disaster Assessment for Watershed Management Using Time-Series Satellite Images.
- Author
-
Jiann-Yeou Rau, Liang-Chien Chen, Jin-King Liu, and Tong-Hsiung Wu
- Published
- 2007
- Full Text
- View/download PDF
25. Fast orthorectification for satellite images using patch backprojection.
- Author
-
Liang-Chien Chen, Tee-Ann Teo, and Jiann-Yeou Rau
- Published
- 2003
- Full Text
- View/download PDF
26. LAVA DOME CHANGES DETECTION AT AGUNG MOUNTAIN DURING HIGH LEVEL OF VOLCANIC ACTIVITY USING UAV PHOTOGRAMMETRY
- Author
-
Jiann Yeou Rau and Ruli Andaru
- Subjects
lcsh:Applied optics. Photonics, geography, lcsh:T, Elevation, Point cloud, lcsh:TA1501-1820, Lava dome, Geodesy, lcsh:Technology, Photogrammetry, Volcano, Impact crater, lcsh:TA1-2040, Interferometric synthetic aperture radar, lcsh:Engineering (General). Civil engineering (General), Digital elevation model, Geology - Abstract
Detection of lava dome changes during periods of high volcanic activity is essential for hazard assessment. However, direct field measurement is challenging to conduct for safety reasons. Here, we investigate the lava dome changes of Mount Agung in Indonesia during the highest level of volcanic activity. On 22 September 2017, the rumbling and seismic activity of this volcano increased to the highest alert level for a period of time. We therefore collected image data over the lava dome area by UAV during this period. To accomplish the goal of change detection, we assembled and developed a fixed-wing UAV platform, a Buffalo FX-79, to acquire images of Mount Agung, whose summit elevation is roughly 3,142 m above sea level. We acquired the UAV images on two dates, 19 and 21 October 2017. Because of the exclusion zone surrounding the volcano, we could only operate the UAV from a distance of 20 km from the crater. With these datasets, we produced three-dimensional point clouds, a high-resolution digital elevation model (DEM), and orthophotos using the Structure from Motion and Multi-View Stereo (SfM-MVS) technique with Photoscan Pro software. From the orthophoto data, we found two fluid areas on the crater's surface, in the NE direction (4,375.9 sq-m) and the SE direction (3,749.8 sq-m). We also detected a fumarole in the eastern part that continued to emit steam and gases for several days. In order to reveal the changes of the lava dome surface, we used the DEMs to create cross-section profiles. We then applied the cloud-to-cloud comparison (C2C) algorithm to calculate the differences of the lava dome between the two point cloud datasets and compared the result with the interferometric result from Sentinel-1A data. The data from the Sentinel-1A satellite (15 Oct – 27 Oct 2017) were processed to obtain an interferogram image of Mount Agung.
This research therefore demonstrates a potential method to detect lava dome changes during high levels of volcanic activity with photogrammetric methods using UAV images. The data were successfully acquired within only two days. From the DEM data and the cross-section profiles between the two datasets, we noticed no significant surface change around the lava dome. Moreover, the point cloud comparison and distance results showed no significant lava dome change or vertical displacement between the two periods. The average difference distance is 2.27 cm, with maximal and minimal displacements of 255 cm and 0.37 cm, respectively. This result was then validated using InSAR Sentinel data, which showed a small displacement, i.e., 6.88 cm. This indicates that UAV photogrammetry performs well in detecting surface changes at the centimeter level.
- Published
- 2019
- Full Text
- View/download PDF
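The cloud-to-cloud (C2C) comparison in entry 26 reduces to nearest-neighbour distance queries between the two epochs' point clouds. A minimal sketch with synthetic clouds, assuming SciPy's KD-tree is an acceptable stand-in for the point cloud software actually used:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Synthetic stand-ins for the two epochs' point clouds (x, y, z in metres);
# the study derived its clouds from the Oct 19 and Oct 21 SfM-MVS runs.
cloud_a = rng.random((5000, 3)) * [100.0, 100.0, 5.0]
cloud_b = cloud_a + [0.0, 0.0, 0.0227]      # simulated ~2.27 cm uplift

# C2C: for every point of the compared cloud, the distance to its
# nearest neighbour in the reference cloud.
tree = cKDTree(cloud_a)
dists, _ = tree.query(cloud_b)

mean_d = dists.mean()                       # average difference distance
```

Real clouds also differ in density and coverage between epochs, which is why the paper cross-checks the C2C result against DEM cross-sections and InSAR.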
27. LANDSLIDE DEFORMATION MONITORING BY THREE-CAMERA IMAGING SYSTEM
- Author
-
Jiann Yeou Rau, J. P. Jhan, and Ruli Andaru
- Subjects
lcsh:Applied optics. Photonics, lcsh:T, Elevation, Point cloud, lcsh:TA1501-1820, Terrain, Landslide, lcsh:Technology, Displacement (vector), Deformation monitoring, Photogrammetry, lcsh:TA1-2040, lcsh:Engineering (General). Civil engineering (General), Image resolution, Geology, Remote sensing - Abstract
Landslide deformation is a critical issue for WuWanZai, as it affects road safety and causes transportation problems. Since the relief of this area is about 400 meters over an area of tens of hectares, we use an unmanned helicopter equipped with a three-camera imaging system to acquire high-spatial-resolution images in order to measure detailed terrain variation. The unmanned helicopter can fly according to the terrain slope to obtain images with a spatial resolution of 1–3 cm. The acquired three-camera images are stitched into one perspective image in advance to construct a large-format virtual image with a frame size of 34 mm by 78 mm and a FOV of 53° × 97°. Integrating ground control points measured by static GNSS continuous observation, we conduct aerial triangulation and dense point cloud generation with PhotoscanPro. We have acquired six datasets of UAV images since April 20, 2018. We then conducted cloud-to-cloud distance calculation, DSM elevation difference calculation, ortho-image change analysis, and photogrammetric point and GNSS station displacement analysis. From the photogrammetric point displacement analysis, we detected 1.6 meters of displacement around the fourth curve of WuWanZai due to heavy rainfall that occurred on June 20. Based on the cloud-to-cloud distance analysis and DSM elevation difference results, we observed more than 5 meters of height difference at the landslide area due to another heavy rainfall on Oct. 23–24. Experimental results demonstrate that the proposed UAV and three-camera imaging system can effectively detect landslide deformation with high accuracy.
- Published
- 2019
28. Integrating UAV and Ground Panoramic Images for Point Cloud Analysis of Damaged Building
- Author
-
Jyun Ping Jhan, Norman Kerle, and Jiann Yeou Rau
- Subjects
Data collection, Computer science, earthquake-damaged building, Point cloud, Rapid processing, Usability, 3-D point clouds (3DPCs), Geotechnical Engineering and Engineering Geology, panoramic image, Image stitching, unmanned aerial vehicle (UAV), Computer vision, Artificial intelligence, Electrical and Electronic Engineering - Abstract
The effectiveness of damaged building investigation relies on rapid data collection, and jointly applying an unmanned aerial vehicle (UAV) and a backpack panoramic imaging system can quickly and comprehensively record the damage status. Meanwhile, integrating them to generate complete 3-D point clouds (3DPCs) is important for further assisting the 3-D measurement of the damaged areas. During the 2016 Meinong earthquake (Taiwan), the system collected multiview aerial images (MVAIs) and ground panoramic images of two collapsed buildings. However, because the spatial offsets of the spherical camera result in nonideal panoramic images (NIPIs), an appropriate spherical radius has to be chosen to reduce the distance-related stitching errors. In order to evaluate the impact of using NIPIs for 3-D mapping, the geometric accuracy of the 3-D scene reconstruction (3DSR) and the usability of the 3DPCs were assessed. This study introduces the stitching errors of panoramic images, uses sky masks for successful 3DSR, and obtains clean point clouds. It then analyzes the usability of point clouds obtained from only NIPIs, only MVAIs, and their integration. The analysis shows that NIPIs can be processed more rapidly than their unstitched original images and can increase the completeness of point clouds at the building's lower floors, while MVAIs can reduce the stitching errors of NIPIs to an acceptable range. Therefore, integrating both image types is necessary to achieve rapid and complete point cloud generation.
- Published
- 2021
29. A unified solution for digital terrain model and orthoimage generation from SPOT stereopairs.
- Author
-
Liang-Chien Chen and Jiann-Yeou Rau
- Published
- 1993
- Full Text
- View/download PDF
30. Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera
- Author
-
Norbert Haala, Jyun Ping Jhan, and Jiann Yeou Rau
- Subjects
Remote sensing application, Computer science, Multispectral image, Hyperspectral imaging, Image processing, Camera array, Atomic and Molecular Physics, and Optics, Computer Science Applications, Image (mathematics), Lens (optics), Computer vision, Precision agriculture, Artificial intelligence, Computers in Earth Sciences, Engineering (miscellaneous) - Abstract
Utilizing miniature multispectral (MS) or hyperspectral (HS) cameras by mounting them on an Unmanned Aerial System (UAS) has the benefits of convenience and flexibility for collecting remote sensing imagery for precision agriculture, vegetation monitoring, and environmental investigation applications. Most miniature MS cameras adopt a multi-lens structure to record discrete MS bands of visible and invisible information. The differences in lens distortion, mounting positions, and viewing angles among lenses mean that the acquired original MS images have significant band misregistration errors. We have developed a Robust and Adaptive Band-to-Band Image Transform (RABBIT) method for dealing with the band co-registration of various types of miniature multi-lens multispectral cameras (Mini-MSCs) to obtain band co-registered MS imagery for remote sensing applications. The RABBIT utilizes a modified projective transformation (MPT) to transfer the multiple image geometry of a multi-lens imaging system to one sensor geometry, and combines this with a robust and adaptive correction (RAC) procedure to correct several systematic errors and to obtain sub-pixel accuracy. This study applies three state-of-the-art Mini-MSCs to evaluate the RABBIT method's performance, specifically the Tetracam Miniature Multiple Camera Array (MiniMCA), Micasense RedEdge, and Parrot Sequoia. Six MS datasets acquired at different target distances, dates, and locations are also used to prove its reliability and applicability. Results prove that RABBIT is feasible for different types of Mini-MSCs, with accurate, robust, and rapid image processing.
- Published
- 2018
- Full Text
- View/download PDF
31. The use of UAV remote sensing for observing lava dome emplacement and areas of potential lahar hazards: An example from the 2017–2019 eruption crisis at Mount Agung in Bali
- Author
-
Heruningtyas Desi Purnamasari, Devy Kamil Syahbana, Ruli Andaru, Jiann Yeou Rau, and Ardy Setya Prayoga
- Subjects
geography, Lahar, Lava dome, Terrain, Dome (geology), Geophysics, Photogrammetry, Impact crater, Volcano, Geochemistry and Petrology, Remote sensing (archaeology), Geology, Remote sensing - Abstract
Mount Agung (the highest volcano in Bali, Indonesia) began to erupt on November 21, 2017, after having been dormant for 53 years. More than 100,000 people were evacuated within the hazard zone between September 2017 (when the highest volcanic alert was issued) and early 2018. The eruptions continued until June 2019, accompanied by at least 110 explosions. During the eruptive crisis, the observation of the lava dome's emplacement was essential for mitigating the potential hazard. Details of the lava dome growth, including the volumetric changes and effusion rates, provide valuable information about potential eruption scenarios and lahar depositions. In this paper, the essential role of multi-temporal unmanned aerial vehicle (UAV) images in monitoring Mt. Agung's lava dome and in determining the areas of potential lahar hazards during the crisis between 2017 and 2019 is described. A fixed-wing UAV was launched outside the hazard zone to photograph the lava dome on five occasions. Image enhancement, machine learning, and photogrammetry were combined to improve image quality, remove point cloud outliers, and generate digital terrain models (DTMs) and orthoimages. The analysis of the obtained DTMs and orthoimages resulted in qualitative and quantitative data highlighting the changes inside the crater and on the surrounding slopes. These results reveal that, from November 25 to December 16, 2017, the lava dome grew vertically by 126 m and reached a volume of 26.86 ± 0.64 × 10⁶ m³. In addition, its surface experienced a maximal uplift of approximately 52 m until July 2019 with the emergence of a new dome with a volume estimated at 9.52 ± 0.086 × 10⁶ m³. The difference between the DTMs of 2017 and 2019 reveals the total volume of erupted material (886,100 ± 8,000 m³) that was deposited on the surrounding slopes.
According to the lahar inundation simulation, the deposited material may cause dangerous lahars in 21 drainages, which extend in the north (N), north-east (N-E), south (S), south-east (S-E), and south-west (S-W) sectors of the volcano. This paper presents the use of UAV remote sensing for the production of high-spatial-resolution DTMs, which can be used both to observe the emplacement of a lava dome and to identify areas with potential lahar risk during a volcanic crisis.
- Published
- 2021
- Full Text
- View/download PDF
32. BRIDGE CRACK DETECTION USING MULTI-ROTARY UAV AND OBJECT-BASE IMAGE ANALYSIS
- Author
-
J. L. Wang, J. P. Jhan, Sendo Wang, W. C. Fang, K. W. Hsiao, and Jiann Yeou Rau
- Subjects
lcsh:Applied optics. Photonics, Computer science, Coordinate system, Image processing, lcsh:Technology, Minimum bounding box, Computer vision, Digital camera, Orientation (computer vision), lcsh:T, lcsh:TA1501-1820, Structural engineering, Spall, Bridge inspection, Visualization, Triangulation (geometry), lcsh:TA1-2040, Artificial intelligence, Focus (optics), lcsh:Engineering (General). Civil engineering (General) - Abstract
Bridges are critical infrastructure, so bridge safety monitoring and maintenance are important responsibilities for government agencies. Conventionally, bridge inspection is conducted by in-situ human visual examination. This procedure sometimes requires an under-bridge inspection vehicle or climbing beneath the bridge in person; it is therefore costly and risky, as well as labor intensive and time consuming. In particular, its documentation procedure is subjective and lacks 3D spatial information. To cope with these challenges, this paper proposes the use of a multi-rotor UAV equipped with a SONY A7r2 high-resolution digital camera, a 50 mm fixed-focal-length lens, and a gimbal that rotates 135 degrees up and down. The target bridge contains three spans with a total length of 60 meters, a width of 20 meters, and a height of 8 meters above the water level. In the end, we took about 10,000 images; some of them were acquired handheld from the ground using a pole 2–8 meters long. Those images were processed with Agisoft PhotoScan Pro to obtain exterior and interior orientation parameters. A local coordinate system was defined using 12 ground control points measured by a total station. After triangulation and camera self-calibration, the RMSE of the control points is less than 3 cm. A 3D CAD model describing the bridge surface geometry was manually measured in PhotoScan Pro; it is composed of planar polygons and is used to search for related UAV images. Additionally, a photorealistic 3D model can be produced for 3D visualization. To detect cracks on the bridge surface, we utilize the object-based image analysis (OBIA) technique to segment the image into objects. We then derive several object features, such as density, area/bounding-box ratio, length/width ratio, and length, and set up a classification rule set to distinguish cracks.
Further, we apply semi-global matching (SGM) to obtain 3D crack information, and based on the image scale we can calculate the width of a crack object. For spalling volume calculation, we also apply SGM to obtain dense surface geometry. Assuming the background is a planar surface, we can fit a planar function and convert the surface geometry into a DSM. For a spalling area, the height will be lower than the plane and its value will be negative. We can thus apply several image processing techniques to segment the spalling area and calculate the spalling volume as well. For bridge inspection and UAV image management within a laboratory, we developed a graphical user interface. Its major functions include automatic crack detection using OBIA; crack editing, i.e. deleting and adding cracks; crack attributing; 3D crack visualization; spalling area/volume calculation; and bridge defect documentation.
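The plane-fitting step behind the spalling volume computation can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the gridded-point-cloud assumption, and the 2 mm depth threshold are hypothetical.

```python
import numpy as np

def fit_plane(points):
    # Least-squares plane z = a*x + b*y + c fitted to an Nx3 array
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def spalling_volume(points, cell_area, depth_threshold=-0.002):
    # Heights relative to the fitted background plane; spalling
    # points lie below the plane, so their residuals are negative.
    a, b, c = fit_plane(points)
    residuals = points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)
    spall = residuals[residuals < depth_threshold]
    # Volume = sum of depths times the footprint of one grid cell
    return float(-spall.sum() * cell_area)
```

On a regular DSM grid, `cell_area` is simply the ground sample distance squared, so the sum of negative residuals converts directly into a volume.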
- Published
- 2017
33. Development of a Large-Format UAS Imaging System With the Construction of a One Sensor Geometry From a Multicamera Array
- Author
-
Jyun Ping Jhan, Yi Tang Li, and Jiann Yeou Rau
- Subjects
010504 meteorology & atmospheric sciences ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,0211 other engineering and technologies ,Triangulation (computer vision) ,Geometry ,02 engineering and technology ,Large format ,Collinearity ,Topographic map ,01 natural sciences ,Image stitching ,Digital image ,Data acquisition ,Transformation (function) ,Photogrammetry ,Homography ,General Earth and Planetary Sciences ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences - Abstract
For the purpose of large-area topographic mapping, this study proposes an imaging system based on a multicamera array unmanned aerial system (UAS) comprised of five small-format digital cameras with a total field of view of 127°. The five digital cameras are aligned in a row along the across-track direction with overlap between two neighboring cameras. The suggested system has higher data acquisition efficiency than the single-camera UAS imaging system. For topographic mapping purposes, we develop a modified projective transformation method to stitch all five raw images into one sensor geometry. In this method, the transformation coefficients are obtained by on-the-job multicamera self-calibration, including interior and relative orientations. During the stitching process, two systematic errors are detected and corrected. In the end, a large-format digital image can be produced for each trigger event independently. The photogrammetric collinearity condition is evaluated using several external accuracy assessments, such as conventional aerial triangulation, stereoplotting, and digital surface model generation procedures. From the accuracy assessment results, we conclude that the presented raw image stitching method can be used to construct a one sensor geometry from a multicamera array and is feasible for 3-D mapping applications.
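The projective mapping underlying such raw-image stitching can be estimated from tie points with a direct linear transform. The sketch below is a generic homography fit assuming at least four well-distributed correspondences; it is not the paper's calibrated MPT, which additionally corrects systematic errors.

```python
import numpy as np

def apply_homography(H, pts):
    # Map Nx2 pixel coordinates through a 3x3 projective transform
    pts_h = np.c_[np.asarray(pts, float), np.ones(len(pts))]
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def estimate_projective(src, dst):
    # Direct linear transform: solve for the 8 coefficients of a
    # projective transform from >= 4 tie-point correspondences.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)
```

With more than four correspondences, the least-squares solution averages out measurement noise, which is why dense tie points improve the internal accuracy of the stitched virtual image.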
- Published
- 2016
- Full Text
- View/download PDF
34. 4D ANIMATION RECONSTRUCTION FROM MULTI-CAMERA COORDINATES TRANSFORMATION
- Author
-
J. P. Jhan, C. M. Chou, and Jiann Yeou Rau
- Subjects
lcsh:Applied optics. Photonics ,Engineering ,lcsh:T ,business.industry ,Computation ,020208 electrical & electronic engineering ,0211 other engineering and technologies ,lcsh:TA1501-1820 ,02 engineering and technology ,Animation ,Multi camera ,lcsh:Technology ,Construction site safety ,Cost reduction ,Photogrammetry ,lcsh:TA1-2040 ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Artificial intelligence ,lcsh:Engineering (General). Civil engineering (General) ,Cofferdam ,business ,Towing ,021101 geological & geomatics engineering - Abstract
Reservoir dredging is important to extend the life of a reservoir. The most effective and economical approach is to construct a tunnel to desilt the bottom sediment. The conventional technique is to build a cofferdam to hold back the water, construct the tunnel intake inside it, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect to the desilting tunnel without building a cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e. launching, dragging, water injection, and sinking. To increase construction safety, a photogrammetric technique is adopted to record images during the simulation, compute the transformation parameters for dynamic analysis, and reconstruct 4D animations. In this study, several Australis© coded targets are fixed on the surface of the ETSP for automatic recognition and measurement. The camera orientations are computed by space resection once the 3D coordinates of the coded targets are measured. Two approaches for motion parameter computation are proposed: performing a 3D conformal transformation from the coordinates of the cameras, and relative orientation computation from the orientation of a single camera. Experimental results show that the 3D conformal transformation can achieve sub-mm simulation results, while the relative orientation computation offers flexibility for dynamic motion analysis and is easier and more efficient.
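A 3D conformal (similarity) transformation between two point sets can be solved in closed form with an SVD-based Procrustes/Umeyama-style estimator. The sketch below is an illustrative stand-in for the paper's transformation step, not its actual code; function and variable names are hypothetical.

```python
import numpy as np

def conformal_3d(src, dst):
    # Estimate scale s, rotation R, translation t minimizing
    # ||dst - (s * R @ src + t)||^2  (3D conformal transformation)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)
    # Guard against a reflection in the least-squares rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Given at least three non-collinear coded targets in both coordinate systems, this yields the seven motion parameters (one scale, three rotations, three translations) in a single step.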
- Published
- 2016
- Full Text
- View/download PDF
35. Band-to-band registration and ortho-rectification of multilens/multispectral imagery: A case study of MiniMCA-12 acquired by a fixed-wing UAS
- Author
-
Jyun Ping Jhan, Cho-ying Huang, and Jiann Yeou Rau
- Subjects
010504 meteorology & atmospheric sciences ,Pixel ,Computer science ,Orientation (computer vision) ,business.industry ,Multispectral image ,0211 other engineering and technologies ,Triangulation (computer vision) ,02 engineering and technology ,01 natural sciences ,Atomic and Molecular Physics, and Optics ,Computer Science Applications ,Temporal resolution ,RGB color model ,Computer vision ,Artificial intelligence ,Computers in Earth Sciences ,business ,Engineering (miscellaneous) ,Image resolution ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,Camera resectioning ,Remote sensing - Abstract
The MiniMCA (Miniature Multiple Camera Array) is a lightweight, frame-based, multilens multispectral sensor suitable for mounting on an unmanned aerial system (UAS) to acquire high spatial and temporal resolution imagery for various remote sensing applications. Since the MiniMCA exhibits a significant band misregistration effect, an automatic and precise band-to-band registration (BBR) method is proposed in this study. Based on the principle of sensor plane-to-plane projection, a modified projective transformation (MPT) model is developed. All MPT coefficients are estimated from indoor camera calibration, together with the correction of two systematic errors, so that all bands can be transferred into the same image space. Quantitative error analysis shows that the proposed BBR scheme is scene independent and can achieve an accuracy of 0.33 pixels, demonstrating that the method is accurate and reliable. Meanwhile, it is difficult to mark ground control points (GCPs) on the MiniMCA images, as their spatial resolution is low when the flight height exceeds 400 m. In this study, a higher-resolution RGB camera is therefore adopted to produce a digital surface model (DSM) and assist MiniMCA ortho-image generation. After precise BBR, only one reference band of the MiniMCA imagery is needed for aerial triangulation, because all bands share the same exterior and interior orientation parameters. This means all MiniMCA imagery can be ortho-rectified using the exterior and interior orientation parameters of the reference band. The results of the proposed ortho-rectification procedure show that the co-registration errors between the MiniMCA reference band and the RGB ortho-images are less than 0.6 pixels.
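Once a band-to-reference transform is known, each slave band can be resampled into the reference geometry by inverse mapping. A minimal nearest-neighbour sketch is shown below; the study's MPT additionally models systematic errors, and production code would interpolate bilinearly, so treat names and details here as hypothetical.

```python
import numpy as np

def register_band(band, H_band_to_ref):
    # Inverse mapping: for every pixel of the reference geometry,
    # look up its source location in the raw band (nearest neighbour).
    H_inv = np.linalg.inv(H_band_to_ref)
    rows, cols = band.shape
    v, u = np.indices((rows, cols))
    pts = np.stack([u.ravel(), v.ravel(), np.ones(u.size)])
    x, y, w = H_inv @ pts
    xi = np.round(x / w).astype(int)
    yi = np.round(y / w).astype(int)
    out = np.zeros(u.size)
    ok = (xi >= 0) & (xi < cols) & (yi >= 0) & (yi < rows)
    out[ok] = band[yi[ok], xi[ok]]
    return out.reshape(rows, cols)
```

Inverse (rather than forward) mapping guarantees every output pixel is assigned exactly once, with no holes in the registered band.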
- Published
- 2016
- Full Text
- View/download PDF
36. Underwater 3D Rigid Object Tracking and 6-DOF Estimation: A Case Study of Giant Steel Pipe Scale Model Underwater Installation
- Author
-
Chih Ming Chou, Jyun Ping Jhan, and Jiann Yeou Rau
- Subjects
0209 industrial biotechnology ,Computer science ,Science ,Coordinate system ,Field of view ,02 engineering and technology ,Tracking (particle physics) ,020901 industrial engineering & automation ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,underwater photogrammetry ,object tracking ,6-DOF ,camera calibration ,Underwater ,business.industry ,Orientation (computer vision) ,020208 electrical & electronic engineering ,Photogrammetry ,Video tracking ,General Earth and Planetary Sciences ,Artificial intelligence ,business ,Camera resectioning - Abstract
The Zengwen desilting tunnel project installed an Elephant Trunk Steel Pipe (ETSP) at the bottom of the reservoir; the pipe is designed to connect to the new bypass tunnel and reach down to the sediment surface. Since the ETSP is huge and its underwater installation is an unprecedented construction method, there are several uncertainties in its dynamic motion during installation. To assure construction safety, a 1:20 ETSP scale model was built to simulate the underwater installation procedure, and its six-degrees-of-freedom (6-DOF) motion parameters were monitored by offline underwater 3D rigid object tracking and photogrammetry. Three cameras were used to form a multicamera system, and several auxiliary devices, such as waterproof housings, tripods, and a waterproof LED, were adopted to protect the cameras and obtain clear images in the underwater environment. However, since it is difficult for divers to position the cameras and ensure that their fields of view overlap, each camera could observe only the head, middle, or tail part of the ETSP, leading to a small overlap area among the images. Therefore, the traditional approach of multi-image forward intersection, in which the cameras' positions and orientations must be calibrated and fixed in advance, is not applicable. Instead, by tracking the 3D coordinates of the ETSP and obtaining the camera orientation information via space resection, we propose a multicamera coordinate transformation and adopt a single-camera relative orientation transformation to calculate the 6-DOF motion parameters. The offline procedure first acquires the 3D coordinates of the ETSP by taking multiposition images with a precalibrated camera in the air, then uses those 3D coordinates as control points to perform space resection of the calibrated underwater cameras. Finally, the 6-DOF parameters of the ETSP are calculated from the camera orientation information through both the multi- and single-camera approaches. 
In this study, we show the results of camera calibration in the air and in the underwater environment, present the 6-DOF motion parameters of the ETSP underwater installation and the reconstructed 4D animation, and compare the differences between the multi- and single-camera approaches.
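Turning per-epoch pose estimates into relative 6-DOF motion can be sketched as below. This is an illustrative reduction assuming a z-y-x (yaw-pitch-roll) Euler convention, which may differ from the authors' parameterization; all names are hypothetical.

```python
import numpy as np

def rigid_motion(R0, T0, R1, T1):
    # Relative rigid motion taking the object pose at epoch 0
    # to its pose at epoch 1:  X1 = dR @ X0 + dT
    dR = R1 @ R0.T
    dT = T1 - dR @ T0
    return dR, dT

def euler_zyx(R):
    # Decompose a rotation matrix (R = Rz @ Ry @ Rx) into
    # roll-pitch-yaw angles in degrees.
    pitch = -np.arcsin(R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([roll, pitch, yaw])
```

The three Euler angles plus the three components of `dT` give the six motion parameters monitored for each simulation frame.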
- Published
- 2020
- Full Text
- View/download PDF
37. Combining Unmanned Aerial Vehicles, and Internet Protocol Cameras to Reconstruct 3-D Disaster Scenes During Rescue Operations
- Author
-
Chia Chang Chuang, Jiann Yeou Rau, Meng-Kuan Lai, and Chung Liang Shih
- Subjects
Emergency Medical Services ,Aircraft ,Taiwan ,030204 cardiovascular system & hematology ,Emergency Nursing ,Computer security ,computer.software_genre ,law.invention ,03 medical and health sciences ,0302 clinical medicine ,Resource (project management) ,law ,Internet Protocol ,Earthquakes ,Rescue Work ,Medicine ,Humans ,Internet ,business.industry ,030208 emergency & critical care medicine ,Models, Theoretical ,Emergency response ,Emergency Medicine ,Geographic Information Systems ,business ,computer - Abstract
Objective: Strong earthquakes often cause massive structural and nonstructural damage, timely assessment of the catastrophe related massive casualty incidents (MCIs) for deploying rescue resource a...
- Published
- 2018
38. A MODIFIED PROJECTIVE TRANSFORMATION SCHEME FOR MOSAICKING MULTI-CAMERA IMAGING SYSTEM EQUIPPED ON A LARGE PAYLOAD FIXED-WING UAS
- Author
-
Jiann Yeou Rau, Y. T. Li, and J. P. Jhan
- Subjects
lcsh:Applied optics. Photonics ,Pixel ,lcsh:T ,business.industry ,Payload (computing) ,lcsh:TA1501-1820 ,Triangulation (computer vision) ,Bundle adjustment ,IOPS ,Field of view ,lcsh:Technology ,Geography ,lcsh:TA1-2040 ,Virtual image ,Nadir ,Computer vision ,Artificial intelligence ,lcsh:Engineering (General). Civil engineering (General) ,business ,Remote sensing - Abstract
In recent years, unmanned aerial systems (UAS) have been applied to collect aerial images for mapping, disaster investigation, vegetation monitoring, etc. A UAS is a platform with higher mobility and lower risk for human operators, but its low payload and short operation time reduce image-collection efficiency. In this study, a multiple-camera system composed of one nadir and four oblique consumer-grade DSLR cameras is mounted on a large-payload UAS, which is designed to collect large-ground-coverage images efficiently. The field of view (FOV) is increased to 127 degrees, making the system suitable for collecting disaster images in mountainous areas. The five simultaneously acquired images are registered and mosaicked into a larger-format virtual image to reduce the number of images and the post-processing time, and to ease stereo plotting. Instead of traditional image matching with bundle adjustment to estimate the transformation parameters, the IOPs and ROPs of the multiple cameras are calibrated and used to derive the coefficients of a modified projective transformation (MPT) model for image mosaicking. However, there is some uncertainty in the indoor-calibrated IOPs and ROPs owing to differing environmental conditions and the vibration of the UAS, which causes misregistration in the initial MPT results. The remaining residuals are analyzed through tie-point matching in the overlap areas of the initial MPT results, from which displacement and scale differences are derived and corrected to refine the ROPs and IOPs for finer registration. In this experiment, the internal accuracy of the mosaic image is better than 0.5 pixels after correcting the systematic errors. A comparison between separate-camera and mosaic images through rigorous aerial triangulation is conducted, in which the RMSE of 5 control and 9 check points is less than 5 cm and 10 cm in the planimetric and vertical directions, respectively, for all cases. 
This proves that the designed imaging system and the proposed scheme have the potential to create large-scale topographic maps.
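The residual displacement and scale correction on tie points can be illustrated with a per-axis least-squares fit. This is a simplified stand-in for the paper's refinement step, with hypothetical names:

```python
import numpy as np

def displacement_scale(src, dst):
    # Fit dst ~= s * src + d independently for the x and y axes,
    # modelling the scale difference and displacement left over
    # after the initial projective mosaicking.
    out = []
    for k in range(2):
        A = np.c_[src[:, k], np.ones(len(src))]
        (s, d), *_ = np.linalg.lstsq(A, dst[:, k], rcond=None)
        out.append((s, d))
    return out  # [(sx, dx), (sy, dy)]
```

The fitted scale and shift can then be folded back into the IOPs/ROPs so the refined MPT coefficients absorb the systematic part of the residuals.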
- Published
- 2018
39. SYSTEMATIC CALIBRATION FOR A BACKPACKED SPHERICAL PHOTOGRAMMETRY IMAGING SYSTEM
- Author
-
Jiann Yeou Rau, J. P. Jhan, K. W. Hsiao, and B. W. Su
- Subjects
lcsh:Applied optics. Photonics ,010504 meteorology & atmospheric sciences ,lcsh:T ,business.industry ,Orientation (computer vision) ,lcsh:TA1501-1820 ,Field of view ,02 engineering and technology ,lcsh:Technology ,01 natural sciences ,Odometer ,Spherical image ,Photogrammetry ,Geography ,lcsh:TA1-2040 ,0202 electrical engineering, electronic engineering, information engineering ,Calibration ,Structure from motion ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,lcsh:Engineering (General). Civil engineering (General) ,business ,0105 earth and related environmental sciences ,Mobile mapping - Abstract
A spherical camera can observe the environment with an almost complete spherical field of view in one shot, which is useful for augmented reality, environment documentation, and mobile mapping applications. This paper aims to develop a spherical photogrammetry imaging system for 3D measurement with a backpacked mobile mapping system (MMS). The equipment includes a Ladybug-5 spherical camera, a tactical-grade positioning and orientation system (POS), i.e. SPAN-CPT, an odometer, etc. This research directly applies the photogrammetric space intersection technique for 3D mapping from a spherical image stereo pair. For this purpose, several systematic calibration procedures are required, including lens distortion calibration, relative orientation calibration, boresight calibration for direct georeferencing, and spherical image calibration. Lens distortion is severe in the Ladybug-5 camera's six original images. For mosaicking these six original images into a spherical image, we propose using their relative orientation and correcting their lens distortion at the same time. However, the constructed spherical image still contains systematic error, which reduces the 3D measurement accuracy. For direct georeferencing, we then establish a ground control field for boresight/lever-arm calibration and apply the calibrated parameters to obtain the exterior orientation parameters (EOPs) of all spherical images. In the end, the 3D positioning accuracy after space intersection is evaluated, including with EOPs obtained by the structure-from-motion method.
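Space intersection from a stereo pair reduces, in its simplest form, to finding the point closest to two rays. A minimal two-ray sketch (real pipelines use a least-squares intersection over redundant observations):

```python
import numpy as np

def intersect_rays(c1, d1, c2, d2):
    # Midpoint of the shortest segment between two (possibly skew)
    # rays c1 + s*d1 and c2 + t*d2 -- a minimal space intersection.
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    n = np.cross(d1, d2)                 # direction of the shortest segment
    A = np.array([d1, -d2, n]).T
    s, t, _ = np.linalg.solve(A, c2 - c1)
    return ((c1 + s * d1) + (c2 + t * d2)) / 2.0
```

Here `c1`, `c2` are the two projection centers from the EOPs, and `d1`, `d2` are the image-ray directions derived from the spherical image coordinates.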
- Published
- 2018
40. ORTHO-RECTIFICATION OF NARROW BAND MULTI-SPECTRAL IMAGERY ASSISTED BY DSLR RGB IMAGERY ACQUIRED BY A FIXED-WING UAS
- Author
-
Cho-ying Huang, Jyun Ping Jhan, and Jiann Yeou Rau
- Subjects
lcsh:Applied optics. Photonics ,Pixel ,Remote sensing application ,business.industry ,Orientation (computer vision) ,lcsh:T ,Perspective (graphical) ,Multispectral image ,lcsh:TA1501-1820 ,lcsh:Technology ,Geography ,lcsh:TA1-2040 ,Temporal resolution ,RGB color model ,Computer vision ,Artificial intelligence ,business ,lcsh:Engineering (General). Civil engineering (General) ,Image resolution ,Remote sensing - Abstract
The Miniature Multiple Camera Array (MiniMCA-12) is a frame-based multilens/multispectral sensor composed of 12 lenses with narrow-band filters. Due to its small size and light weight, it is suitable for mounting on an unmanned aerial system (UAS) to acquire imagery of high spectral, spatial, and temporal resolution for various remote sensing applications. However, because each band's wavelength range is only 10 nm, the images have low resolution and a low signal-to-noise ratio, which makes them unsuitable for image matching and digital surface model (DSM) generation. Moreover, the spectral correlation among the 12 bands of MiniMCA images is low, so it is difficult to perform tie-point matching and aerial triangulation for all bands at the same time. In this study, we therefore propose the use of a DSLR camera to assist automatic aerial triangulation of MiniMCA-12 imagery and to produce a higher-spatial-resolution DSM for MiniMCA-12 ortho-image generation. Depending on the maximum payload weight of the UAS, the two sensors can be flown together or individually. In this study, we adopt a fixed-wing UAS to carry a Canon EOS 5D Mark II DSLR camera and a MiniMCA-12 multispectral camera. To perform automatic aerial triangulation between the DSLR camera and the MiniMCA-12, we choose one master band from the MiniMCA-12 whose spectral range overlaps with that of the DSLR camera. However, the lenses of the MiniMCA-12 have different perspective centers and viewing angles, so the original 12 channels exhibit a significant band misregistration effect; the first issue encountered is to reduce this effect. Because all 12 MiniMCA lenses are frame-based, their spatial offsets are smaller than 15 cm and the images overlap by almost 98%; we therefore propose a modified projective transformation (MPT) method, together with two systematic error correction procedures, to register all 12 bands of imagery in the same image space. 
As a result, the 12 bands of images acquired at the same exposure time have the same interior orientation parameters (IOPs) and exterior orientation parameters (EOPs) after band-to-band registration (BBR). In the aerial triangulation stage, the master band of the MiniMCA-12 is thus treated as a reference channel to link with the DSLR RGB images; that is, all reference images from the master band and all RGB images are triangulated at the same time in the same coordinate system of ground control points (GCPs). Because the spatial resolution of the RGB images is higher than that of the MiniMCA-12, the GCPs can be marked on the RGB images even when they cannot be recognized on the MiniMCA images. Furthermore, a one-meter gridded digital surface model (DSM) is created from the RGB images and applied to the MiniMCA imagery for ortho-rectification. Quantitative error analyses show that the proposed BBR scheme can achieve an average misregistration residual length of 0.33 pixels, and that the co-registration errors among the 12 MiniMCA ortho-images, and between the MiniMCA and Canon RGB ortho-images, are all less than 0.6 pixels. The experimental results demonstrate that the proposed method is robust, reliable, and accurate for future remote sensing applications.
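Residual lengths such as the 0.33-pixel figure can be computed from check points measured on the reference and registered bands. A small sketch of that metric (names are hypothetical):

```python
import numpy as np

def coregistration_errors(ref_pts, band_pts):
    # Residual-vector lengths (pixels) between check points measured
    # on the reference ortho-image and on a co-registered band.
    d = np.asarray(band_pts, float) - np.asarray(ref_pts, float)
    lengths = np.hypot(d[:, 0], d[:, 1])
    return lengths.mean(), lengths.max()
```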
- Published
- 2018
41. NEW METHOD FOR THE CALIBRATION OF MULTI-CAMERA MOBILE MAPPING SYSTEMS
- Author
-
Jiann Yeou Rau, Ana Paula Kersting, and Ayman Habib
- Subjects
lcsh:Applied optics. Photonics ,010504 meteorology & atmospheric sciences ,GPS/INS ,0211 other engineering and technologies ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,lcsh:Technology ,01 natural sciences ,Inertial measurement unit ,Calibration ,Computer vision ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,lcsh:T ,business.industry ,Orientation (computer vision) ,Process (computing) ,lcsh:TA1501-1820 ,Collinearity equation ,Geography ,lcsh:TA1-2040 ,Global Positioning System ,Artificial intelligence ,lcsh:Engineering (General). Civil engineering (General) ,business ,Mobile mapping - Abstract
Mobile mapping systems (MMS) allow fast and cost-effective collection of geospatial information. Such systems integrate a set of imaging sensors and a position and orientation system (POS) comprising GPS and INS units. System calibration is a crucial process for attaining the expected accuracy of such systems. It involves the calibration of the individual sensors as well as the calibration of the mounting parameters relating the system components. The mounting parameters of a multi-camera MMS include two sets of relative orientation parameters (ROPs): the lever-arm offsets and boresight angles relating the cameras to the IMU body frame, and the ROPs among the cameras (in the absence of GPS/INS data). In this paper, a novel single-step calibration method capable of estimating both sets of ROPs is devised; the method can also use the ROPs among the cameras as prior information. The implemented procedure consists of an integrated sensor orientation (ISO) in which the GPS/INS-derived position and orientation and the system mounting parameters are directly incorporated into the collinearity equations. The concept of modified collinearity equations has been used by a few authors for single-camera systems; in this paper, a new modification of the collinearity equations for GPS/INS-assisted multi-camera systems is introduced. Experimental results using a real dataset demonstrate the feasibility of the proposed method.
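The georeferencing chain that such a modified collinearity model inverts can be written as a simple forward computation. This sketch assumes particular frame conventions (camera-to-body boresight rotation, body-to-mapping rotation); all names are hypothetical:

```python
import numpy as np

def ground_point(r_gnss, R_body_to_map, lever_arm, R_cam_to_body, ray_cam, scale):
    # Map-frame coordinates of an object point along an image ray:
    # GNSS/INS position plus the body-rotated lever arm and the
    # boresight-rotated, scaled camera-frame ray.
    return r_gnss + R_body_to_map @ (lever_arm + scale * (R_cam_to_body @ ray_cam))
```

Calibration estimates `lever_arm` and `R_cam_to_body` (per camera) so that rays from different cameras intersect at the same ground coordinates.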
- Published
- 2018
42. INVESTIGATION OF PARALLAX ISSUES FOR MULTI-LENS MULTISPECTRAL CAMERA BAND CO-REGISTRATION
- Author
-
J. P. Jhan, Norbert Haala, Jiann Yeou Rau, and Michael Cramer
- Subjects
lcsh:Applied optics. Photonics ,Systematic error ,010504 meteorology & atmospheric sciences ,Multispectral image ,0211 other engineering and technologies ,Co registration ,02 engineering and technology ,lcsh:Technology ,01 natural sciences ,law.invention ,law ,Calibration ,Computer vision ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,Remote sensing ,lcsh:T ,business.industry ,lcsh:TA1501-1820 ,Lens (optics) ,Geography ,lcsh:TA1-2040 ,Artificial intelligence ,Image transformation ,lcsh:Engineering (General). Civil engineering (General) ,Parallax ,Focus (optics) ,business - Abstract
Multi-lens multispectral cameras (MSCs), such as the Micasense RedEdge and Parrot Sequoia, record multispectral information through separate lenses. Their light weight and small size make them well suited for mounting on an unmanned aerial system (UAS) to collect high-spatial-resolution images for vegetation investigation. However, because the multi-lens structure induces significant band misregistration effects in the original images, band co-registration is necessary to obtain accurate spectral information. A robust and adaptive band-to-band image transform (RABBIT) is proposed for the band co-registration of multi-lens MSCs. The first step is to obtain the camera rig information from camera system calibration and utilize the calibrated results for image transformation and lens distortion correction. Since calibration uncertainty leads to different amounts of systematic error, the last step optimizes the results to acquire better co-registration accuracy. Because parallax causes significant band misregistration when the camera is close to the target, four datasets acquired with the RedEdge and Sequoia, including aerial and close-range imagery, were used to evaluate the performance of RABBIT. The aerial-image results show that RABBIT can achieve sub-pixel accuracy, which is suitable for the band co-registration of any multi-lens MSC. The close-range results show the same performance when band co-registration focuses on a specific target for 3D modelling, or when the target is equidistant from the camera.
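The depth dependence of the residual parallax between two lenses follows the standard disparity relation. The numbers below are illustrative, not the actual specifications of either camera:

```python
def parallax_shift_px(baseline_m, depth_m, focal_px):
    # Misregistration (pixels) caused by the lens offset `baseline_m`
    # for a target at distance `depth_m`:  shift = f * B / Z.
    return focal_px * baseline_m / depth_m
```

For an assumed 1000 px focal length and a 3 cm lens baseline, the shift is about 1 px at 30 m flying height but 10 px at 3 m, which is why close-range imagery needs target-specific optimization while aerial imagery does not.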
- Published
- 2018
43. THE PERFORMANCE ANALYSIS OF A UAV BASED MOBILE MAPPING SYSTEM
- Author
-
Meng Lun Tsai, Cheng Fang Lo, Yi Hsing Tseng, Yun Wen Huang, Jiann Yeou Rau, and Kai-Wei Chiang
- Subjects
lcsh:Applied optics. Photonics ,business.product_category ,Orientation (computer vision) ,business.industry ,lcsh:T ,Real-time computing ,lcsh:TA1501-1820 ,Communications system ,lcsh:Technology ,Flight test ,Photogrammetry ,Geography ,lcsh:TA1-2040 ,Global Positioning System ,Computer vision ,Artificial intelligence ,Differential GPS ,business ,lcsh:Engineering (General). Civil engineering (General) ,Mobile mapping ,Digital camera - Abstract
To facilitate applications such as environmental monitoring or disaster assessment, developing a rapid and low-cost system to collect near-real-time spatial information is very important; such a capability has become an emerging trend in remote sensing and mapping technology. In this study, a fixed-wing UAV-based spatial information acquisition platform is developed and evaluated. The proposed platform has a direct georeferencing module comprising a low-cost INS/GPS integrated system and a low-cost digital camera, as well as other general UAV modules including a real-time video monitoring communication system. The direct georeferencing module provides differential GPS processing with single-frequency carrier-phase measurements to obtain sufficient positioning accuracy. All necessary calibration procedures, including the interior orientation parameters, lever arm, and boresight angles, are implemented. In addition, a flight test is performed to verify the positioning accuracy in direct georeferencing mode without using any ground control points, which most current UAV-based photogrammetric platforms require; in other words, this is one of the pilot studies of a direct-georeferencing-based UAV photogrammetric platform. The preliminary results show horizontal positioning accuracies of less than 20 meters in both the x and y axes, and a vertical accuracy of less than 50 meters, at a flight height of 600 meters above ground. Such accuracy is adequate for near-real-time disaster relief. The platform is therefore a relatively safe and cheap means of collecting critical spatial information for urgent responses such as disaster relief and assessment where ground control points are not available.
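The horizontal and vertical accuracy figures quoted above are the usual check-point RMSE metrics, which can be sketched as follows (an illustrative computation, not the authors' evaluation code):

```python
import numpy as np

def checkpoint_rmse(estimated, reference):
    # Horizontal (x-y) and vertical (z) RMSE over check points.
    d = np.asarray(estimated, float) - np.asarray(reference, float)
    horizontal = np.sqrt(np.mean(d[:, 0] ** 2 + d[:, 1] ** 2))
    vertical = np.sqrt(np.mean(d[:, 2] ** 2))
    return horizontal, vertical
```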
- Published
- 2018
44. Accuracy Analysis of Three-Dimensional Model Reconstructed by Spherical Video Images
- Author
-
Firdaus, Muhammad Irsyadi and Jiann-Yeou Rau
- Published
- 2018
- Full Text
- View/download PDF
45. Use of principal components of UAV-acquired narrow-band multispectral imagery to map the diverse low stature vegetation fAPAR
- Author
-
Hsin Lin Wei, Jiann Yeou Rau, Jyun Ping Jhan, and Cho-ying Huang
- Subjects
010504 meteorology & atmospheric sciences ,digestive, oral, and skin physiology ,Multispectral image ,0211 other engineering and technologies ,02 engineering and technology ,01 natural sciences ,Narrow band ,Photosynthetically active radiation ,Principal component analysis ,medicine ,General Earth and Planetary Sciences ,Environmental science ,Precision agriculture ,Leaf area index ,medicine.symptom ,Vegetation (pathology) ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,Remote sensing - Abstract
The fraction of absorbed photosynthetically active radiation (fAPAR) is an important plant physiological index used to assess the ability of vegetation to absorb PAR, which plants use to sequester atmospheric carbon. The index is also important for monitoring plant health and productivity; it has been widely used to monitor low-stature crops and is a crucial metric for food security assessment. The fAPAR has commonly been correlated with a greenness index derived from spaceborne optical imagery, but the relatively coarse spatial or temporal resolution may prohibit its application on complex land surfaces. In addition, the relationships between fAPAR and remotely sensed greenness data may be influenced by the heterogeneity of canopies. Multispectral and hyperspectral unmanned aerial vehicle (UAV) imaging systems, conversely, can provide several spectral bands at sub-meter resolutions, permitting precise estimation of fAPAR using chemometrics. However, the data pre-processing procedures are cumbersome, which makes large-scale mapping challenging. In this study, we applied a set of well-verified image processing protocols and a chemometric model to a lightweight, frame-based, narrow-band (10 nm) UAV imaging system to estimate the fAPAR over a relatively large cultivated area with a variety of low-stature vegetation: tropical crops along with native and non-native grasses. A principal component regression was applied to 12 bands of spectral reflectance data to minimize the collinearity issue and compress the data variation. Stepwise regression was employed to reduce the data dimensionality, and the first, third and fifth components were selected to estimate the fAPAR. Our results indicate that 77% of the fAPAR variation was explained by the model. 
All bands that are sensitive to foliar pigment concentrations, canopy structure and/or leaf water content may contribute to the estimation, especially those located close to (720 nm) or within (750 nm and 780 nm) the near-infrared spectral region. This study demonstrates that this narrow-band frame-based UAV system would be useful for vegetation monitoring. With proper pre-flight planning and hardware improvement, the mapping of a narrow-band multispectral UAV system could be comparable to that of a manned aircraft system.
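Principal component regression on band reflectances can be sketched with plain numpy. This is a generic PCR, not the study's chemometric model; the default component selection (first, third, fifth) mirrors the text, but everything else is hypothetical.

```python
import numpy as np

def pcr_fit(X, y, components=(0, 2, 4)):
    # Principal component regression: project centered reflectance
    # onto selected PCs and fit ordinary least squares there.
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[list(components)].T          # loadings of the chosen PCs
    scores = Xc @ V
    A = np.c_[scores, np.ones(len(y))]  # add an intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return mu, V, beta

def pcr_predict(X, mu, V, beta):
    scores = (X - mu) @ V
    return np.c_[scores, np.ones(len(X))] @ beta
```

Projecting onto a few orthogonal components sidesteps the collinearity among the 12 narrow bands that would destabilize a direct multiband regression.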
- Published
- 2018
- Full Text
- View/download PDF
46. Analysis of Oblique Aerial Images for Land Cover and Point Cloud Classification in an Urban Environment
- Author
-
Ya Ching Hsu, Jyun Ping Jhan, and Jiann Yeou Rau
- Subjects
Computer science ,business.industry ,Point cloud ,Oblique case ,Land cover ,Spectral bands ,Object (computer science) ,Photogrammetry ,General Earth and Planetary Sciences ,RGB color model ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Remote sensing ,Feature detection (computer vision) - Abstract
In addition to aerial imagery, point clouds are important remote sensing data in urban environment studies. It is essential to extract semantic information from both images and point clouds for such purposes; thus, this study aims to automatically classify 3-D point clouds generated from oblique aerial imagery (OAI) and vertical aerial imagery (VAI) into urban object classes such as roof, facade, road, tree, and grass. A multicamera airborne imaging system that can simultaneously acquire VAI and OAI is proposed. The acquired small-format images contain only three RGB spectral bands and are used to generate photogrammetric point clouds through a multiview-stereo dense matching technique. To assign each 3-D point to a corresponding urban object class, we first analyzed the original OAI through object-based image analysis. A rule-based hierarchical semantic classification scheme that utilizes spectral information and geometry- and topology-related features was developed, in which object height and gradient features derived from the photogrammetric point clouds assist in the detection of elevated objects, particularly roofs and facades. Finally, the photogrammetric point clouds were classified into the aforementioned five classes. The classification accuracy was assessed in image space, and four experiments showed overall accuracies between 82.47% and 91.8%. In addition, visual and consistency analyses were performed to demonstrate the proposed scheme's feasibility, transferability, and reliability, particularly for distinguishing elevated objects from OAI, which suffers from severe occlusion, image-scale variation, and ambiguous spectral characteristics.
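The hierarchy described here, where height separates elevated objects, a gradient feature splits roof from facade, and spectral greenness splits vegetation, can be sketched as a toy rule set. The thresholds and the ExG greenness index below are illustrative assumptions, not the paper's actual rules or values.

```python
import numpy as np

def classify_points(z_above_ground, slope_deg, exg):
    """Toy rule-based hierarchy in the spirit of the paper: height
    detects elevated objects, surface gradient separates roof from
    facade, and a greenness index (e.g., excess green from RGB)
    separates tree/grass. All thresholds are hypothetical."""
    cls = np.full(z_above_ground.shape, "road", dtype=object)
    green = exg > 0.1                      # vegetated points
    high = z_above_ground > 2.5            # elevated points
    cls[green & ~high] = "grass"
    cls[green & high] = "tree"
    steep = slope_deg > 70                 # near-vertical surfaces
    cls[~green & high & steep] = "facade"
    cls[~green & high & ~steep] = "roof"
    return cls

z = np.array([0.2, 0.1, 8.0, 8.0, 1.0, 6.0])    # height above ground (m)
s = np.array([5.0, 3.0, 80.0, 10.0, 4.0, 15.0]) # local gradient (deg)
g = np.array([-0.05, 0.3, -0.1, -0.1, 0.25, 0.4])
print(classify_points(z, s, g))
```

Because the system carries only three RGB bands, geometric features (height, gradient) have to do the work that near-infrared would otherwise do, which is exactly why the paper derives them from the photogrammetric point cloud.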
- Published
- 2015
- Full Text
- View/download PDF
47. DSM Extraction from Pleiades Images Using RSP
- Author
-
Firdaus, Muhammad Irsyadi and Jiann-Yeou Rau
- Published
- 2017
- Full Text
- View/download PDF
48. Semiautomatic Object-Oriented Landslide Recognition Scheme From Multisensor Optical Imagery and DEM
- Author
-
J. P. Jhan, Jiann Yeou Rau, and Ruey Juin Rau
- Subjects
Watershed ,Digital mapping ,business.industry ,Computer science ,Multispectral image ,Land management ,Landslide ,Topographic map ,Digital image ,Typhoon ,General Earth and Planetary Sciences ,Segmentation ,Computer vision ,Artificial intelligence ,Digital single-lens reflex camera ,Electrical and Electronic Engineering ,business ,Digital elevation model ,Remote sensing - Abstract
Rainfall-induced landslides are a major threat in Taiwan, particularly during the typhoon season. A precise survey of landslides after a severe event is a critical task for disaster, watershed, and forestry land management. In this paper, we utilize high-spatial-resolution multispectral optical imagery and a digital elevation model (DEM) with an object-oriented analysis technique to develop a landslide recognition scheme based on multilevel segmentation and a hierarchical semantic network. Four case studies are presented to evaluate the feasibility of the proposed scheme. Three kinds of remote sensing imagery are used: pan-sharpened FORMOSAT-2 satellite images, aerial digital images from a Z/I digital mapping camera, and images acquired by a digital single-lens reflex camera mounted on a fixed-wing unmanned aerial vehicle. An accuracy assessment is performed on three test sites containing hundreds of landslides associated with Typhoon Morakot. The input data comprise ortho-rectified imagery and a DEM. Four spectral object features and one topographic object feature are derived for semiautomatic landslide recognition. The threshold values are determined semiautomatically by statistical estimation from a few training samples. The experimental results show that the proposed approach can reduce commission/omission errors, achieving missing and branching factors of less than 0.12 with a quality percentage of 81.7%. The results demonstrate the feasibility and accuracy of the proposed landslide recognition scheme even when different optical sensors are utilized.
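The semiautomatic thresholding step, deriving feature cutoffs statistically from a few training samples, can be sketched as below. The mean ± 2σ statistic, the NDVI feature, and every number are assumptions for illustration; the paper's actual features and estimator may differ.

```python
import numpy as np

def estimate_threshold(samples, k=2.0):
    """Semiautomatic threshold estimation: derive a feature interval
    from a few training samples as mean +/- k*std. The statistic and
    k are illustrative, not necessarily the paper's choice."""
    m, s = samples.mean(), samples.std()
    return m - k * s, m + k * s

# Hypothetical NDVI values sampled from a handful of landslide
# training objects: bare scars have low NDVI compared with forest.
train_ndvi = np.array([0.08, 0.12, 0.10, 0.15, 0.09])
lo, hi = estimate_threshold(train_ndvi)

# Apply the learned interval to candidate segmentation objects.
candidates = np.array([0.11, 0.45, 0.07, 0.60, 0.13])
is_landslide = (candidates >= lo) & (candidates <= hi)
print(is_landslide)
```

Estimating thresholds per scene from samples, rather than hard-coding them, is what lets the same scheme transfer across the three sensors (satellite, mapping camera, UAV DSLR) with different radiometry.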
- Published
- 2014
- Full Text
- View/download PDF
49. A cost-effective strategy for multi-scale photo-realistic building modeling and web-based 3-D GIS applications in real estate
- Author
-
Chen Kuang Cheng and Jiann Yeou Rau
- Subjects
Distributed GIS ,Geospatial analysis ,Database ,business.industry ,Ecological Modeling ,Geography, Planning and Development ,computer.software_genre ,Urban Studies ,Computer graphics ,Geography ,GIS applications ,Web application ,The Internet ,Data mining ,Enterprise GIS ,business ,computer ,AM/FM/GIS ,General Environmental Science - Abstract
Web-based 3-D GIS may be the most appropriate tool for decision makers in land management and development. It provides not only basic GIS functions but also visually realistic landscape and architectural detail, and it gives the user an immersive 3-D virtual reality environment through the Internet that differs markedly from what can be conveyed through text, pictures, or videos alone. However, at high accuracy and level of detail (LOD), generating a fully photo-realistic city model is labor intensive and time consuming. At the same time, from the perspective of computer graphics, the result is simply a geometric model without thematic information. Thus, the objective of this study is to propose a cost-effective multi-scale building modeling strategy based on 2-D GIS building footprints with rich attributes, and to realize its application in the real estate market through a web-based 3-D GIS platform. Because the data volume of a photo-realistic city model is generally huge, a multi-scale building modeling strategy comprising block modeling, generic texture modeling, photo-realistic economic modeling, and photo-realistic detailed modeling is proposed to increase Internet data streaming efficiency and reduce modeling cost. Since 2-D building boundary polygons are widely available and well attributed (e.g., number of stories, address, type, and material), the photo-realistic city model can be constructed from them. Meanwhile, conventional 2-D spatial analysis can be maintained and extended to 3-D GIS in the proposed scheme. For real estate applications, a location query system for selecting the optimum living environment is established, and several geospatial query and analysis functionalities are realized, such as address and road-junction positioning and terrain profile analysis.
An experimental study area 11 km² in size is used to demonstrate that the proposed multi-scale building modeling strategy and its integration into a web-based 3-D GIS platform are both efficient and cost-effective.
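The four-tier modeling strategy trades fidelity against streaming cost, and a client would typically pick a tier per building at render time. The selection rule below is a hypothetical sketch of that idea; the distance cutoffs and the importance flag are invented for illustration and do not come from the paper.

```python
def choose_lod(distance_m, importance="ordinary"):
    """Illustrative level-of-detail selection echoing the paper's
    four tiers: block, generic texture, photo-realistic economic,
    and photo-realistic detailed. Cutoffs are hypothetical."""
    if distance_m < 300:
        # Only buildings flagged as important get the costly model.
        if importance == "landmark":
            return "photo-realistic detailed"
        return "photo-realistic economic"
    if distance_m < 1000:
        return "generic texture"
    return "block"

print(choose_lod(150))
print(choose_lod(2000))
```

Serving cheaper tiers for distant or ordinary buildings is what keeps the Internet streaming volume manageable while reserving detailed photo-realistic models for where they matter.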
- Published
- 2013
- Full Text
- View/download PDF
50. Directly Georeferenced Ground Feature Points with UAV Borne Photogrammetric Platform without Ground Control
- Author
-
Meng Lun Tsai, Jiann Yeou Rau, Kai-Wei Chiang, and Cheng Fang Lo
- Subjects
business.product_category ,business.industry ,Computer science ,Orientation (computer vision) ,General Medicine ,Flight test ,Photogrammetry ,Georeference ,Global Positioning System ,business ,Differential GPS ,Spatial analysis ,Remote sensing ,Digital camera - Abstract
In order to facilitate applications such as environmental detection or disaster monitoring, developing a rapid and low-cost system to collect near-real-time spatial information is very important, and such rapid collection capability has become an emerging trend in remote sensing and mapping. In this study, a fixed-wing UAV-based spatial information acquisition platform is developed and evaluated. The proposed platform has a direct georeferencing module comprising a low-cost INS/GPS integrated system and a low-cost digital camera, as well as other general UAV modules, including a real-time video monitoring and communication system. The direct georeferencing module provides differential GPS processing with single-frequency carrier-phase measurements to obtain sufficient positioning accuracy. All necessary calibration procedures, including estimation of the interior orientation parameters, lever arm, and boresight angles, are implemented. In addition, a flight test is performed to verify the positioning accuracy in direct georeferencing mode without using any ground control points (GCPs), which most current UAV-based photogrammetric platforms require; in other words, this is one of the pilot studies on direct-georeferencing-based UAV photogrammetric platforms. The preliminary results without any GCPs show horizontal positioning accuracies of less than 20 meters in both the x and y axes, whereas the vertical (z-axis) accuracy is less than 50 meters at a flight height of 600 meters above ground. Such accuracy is sufficient for near-real-time disaster relief. Therefore, the platform is a relatively safe and inexpensive means of collecting critical spatial information for urgent responses, such as disaster relief and assessment applications, where ground control points are not available.
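The role of the calibrated lever arm and boresight angles can be seen in the standard direct georeferencing equation, sketched below in a generic form (this is textbook notation, not the paper's exact formulation, and all numeric values are toy inputs):

```python
import numpy as np

def ground_point(r_gnss, R_ins, lever_arm, R_bore, x_img, scale):
    """Generic direct georeferencing: map an image-space vector to
    mapping-frame coordinates from the GNSS antenna position, INS
    attitude matrix, calibrated antenna-to-camera lever arm, and
    boresight rotation:

        X = r_gnss + R_ins @ (lever_arm + scale * R_bore @ x_img)
    """
    return r_gnss + R_ins @ (lever_arm + scale * (R_bore @ x_img))

# Toy example: level flight with identity attitude and boresight.
r = np.array([0.0, 0.0, 600.0])      # GNSS antenna position (m)
arm = np.array([0.0, 0.2, -0.1])     # antenna-to-camera offset (m)
x = np.array([0.01, -0.02, -0.05])   # image-space vector (m)
pt = ground_point(r, np.eye(3), arm, np.eye(3), x, scale=12000.0)
print(pt)
```

Errors in the lever arm or boresight propagate directly into the ground coordinates, scaled by flight height, which is why these calibration steps dominate the achievable accuracy when no GCPs are available to absorb residual biases.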
- Published
- 2013
- Full Text
- View/download PDF