41 results for "Stefan Muckenhuber"
Search Results
2. Occlusion Model—A Geometric Sensor Modeling Approach for Virtual Testing of ADAS/AD Functions
- Author
-
Simon Genser, Stefan Muckenhuber, Christoph Gaisberger, Sarah Haas, and Timo Haid
- Subjects
Occlusion model, sensor model, virtual testing, ADAS/AD functions, perception sensor, Transportation engineering, Transportation and communications - Abstract
New advanced driver assistance system/automated driving (ADAS/AD) functions have the potential to significantly enhance the safety of vehicle passengers and road users, while also enabling new transportation applications and potentially reducing CO2 emissions. To achieve the next level of driving automation, i.e., SAE Level-3, physical test drives need to be supplemented by simulations in virtual test environments. A major challenge for today’s virtual test environments is to provide a realistic representation of the vehicle’s perception system (camera, lidar, radar). Therefore, new and improved sensor models are required to perform representative virtual tests that can supplement physical test drives. In this article, we present a computationally efficient, mathematically complete, and geometrically exact generic sensor modeling approach that solves the FOV (field of view) and occlusion task. We also discuss potential extensions, such as bounding-box cropping and sensor-specific, weather-dependent FOV-reduction approaches for camera, lidar, and radar. The performance of the new modeling approach is demonstrated using camera measurements from a test campaign conducted in Hungary in 2020 plus three artificial scenarios (a multi-target scenario with an adjacent truck occluding other road users and two traffic jam situations in which the ego vehicle is either a car or a truck). These scenarios are benchmarked against existing sensor modeling approaches that only exclude objects that are outside the sensor’s maximum detection range or angle. The modeling approach presented can be used as is or provide the basis for a more complex sensor model, as it reduces the number of potentially detectable targets and therefore improves the performance of subsequent simulation steps.
- Published
- 2023
- Full Text
- View/download PDF
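The occlusion-model entry above describes a geometric approach that first discards objects outside the sensor's maximum detection range and angle and then removes objects hidden behind closer ones. The sketch below illustrates that idea in 2D only; the object fields, FOV parameters, and the angular-interval occlusion test are assumptions for illustration, not the published model.

```python
import math
from dataclasses import dataclass

@dataclass
class Obj:
    obj_id: int
    x: float           # longitudinal position relative to the sensor [m]
    y: float           # lateral position relative to the sensor [m]
    half_width: float  # half of the object's width facing the sensor [m]

def visible_objects(objects, max_range=150.0, half_fov_deg=60.0):
    """Keep objects inside range/FOV that are not fully hidden behind a closer object.

    Occlusion is approximated in 2D: each kept object blocks the angular interval
    it subtends; a farther object whose interval is completely covered is dropped.
    """
    candidates = []
    for o in objects:
        r = math.hypot(o.x, o.y)
        bearing = math.degrees(math.atan2(o.y, o.x))
        if r > max_range or abs(bearing) > half_fov_deg:
            continue  # outside maximum detection range or angle (baseline filter)
        half_span = math.degrees(math.atan2(o.half_width, r))
        candidates.append((r, bearing - half_span, bearing + half_span, o))

    candidates.sort(key=lambda c: c[0])   # nearest first
    visible, blocked = [], []             # blocked: angular intervals already occupied
    for r, lo, hi, o in candidates:
        covered = any(b_lo <= lo and hi <= b_hi for b_lo, b_hi in blocked)
        if not covered:
            visible.append(o)
            blocked.append((lo, hi))
    return visible

if __name__ == "__main__":
    scene = [Obj(1, 20, 0.0, 1.0), Obj(2, 60, 0.5, 0.9), Obj(3, 80, 30.0, 1.0)]
    print([o.obj_id for o in visible_objects(scene)])  # object 2 hides behind 1 -> [1, 3]
```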
3. Angle-dependent spectral reflectance material dataset based on 945 nm time-of-flight camera measurements
- Author
-
David J. Ritter, Relindis Rott, Birgit Schlager, Stefan Muckenhuber, Simon Genser, Martin Kirchengast, and Marcus Hennecke
- Subjects
Lidar, Reflectance, Spectral, NIR, Infrared, 945 nm, Computer applications to medicine. Medical informatics, Science (General) - Abstract
The main objective of this article is to provide angle-dependent spectral reflectance measurements of various materials in the near infrared spectrum. In contrast to already existing reflectance libraries, e.g., NASA ECOSTRESS and Aster reflectance libraries, which consider only perpendicular reflectance measurements, the presented dataset includes angular resolution of the material reflectance. To conduct the angle-dependent spectral reflectance material measurements, a new measurement device based on a 945 nm time-of-flight camera is used, which was calibrated using Lambertian targets with defined reflectance values at 10, 50, and 95%. The spectral reflectance material measurements are taken for an angle range of 0° to 80° with 10° incremental steps and stored in table format. The developed dataset is categorized with a novel material classification, divided into four different levels of detail considering material properties and distinguishing predominantly between mutually exclusive material classes (level 1) and material types (level 2). The dataset is published open access on the open repository Zenodo with record number 7467552 and version 1.0.1 [1]. Currently, the dataset contains 283 measurements and is continuously extended in new versions on Zenodo.
- Published
- 2023
- Full Text
- View/download PDF
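The dataset entry above tabulates material reflectance for incidence angles from 0° to 80° in 10° steps. Below is a minimal sketch of how such a table could be queried for intermediate angles by linear interpolation; the material values are made up for illustration and are not taken from the Zenodo record.

```python
import numpy as np

# Hypothetical angle-dependent reflectance of one material at 945 nm:
# incidence angles 0..80 deg in 10 deg steps (values invented for illustration).
angles_deg = np.arange(0, 90, 10)
reflectance = np.array([0.52, 0.51, 0.50, 0.48, 0.45, 0.41, 0.35, 0.27, 0.18])

def reflectance_at(angle_deg: float) -> float:
    """Linearly interpolate the tabulated angle-dependent reflectance."""
    angle = np.clip(angle_deg, angles_deg[0], angles_deg[-1])
    return float(np.interp(angle, angles_deg, reflectance))

print(reflectance_at(37.5))  # value between the 30 deg and 40 deg table entries
```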
4. Automated snow avalanche monitoring for Austria: State of the art and roadmap for future work
- Author
-
Kathrin Lisa Kapper, Thomas Goelles, Stefan Muckenhuber, Andreas Trügler, Jakob Abermann, Birgit Schlager, Christoph Gaisberger, Markus Eckerstorfer, Jakob Grahn, Eirik Malnes, Alexander Prokop, and Wolfgang Schöner
- Subjects
remote sensing, synthetic aperture radar, machine learning, Sentinel-1, Austrian Alps, U-Net, Geophysics. Cosmic physics, Meteorology. Climatology - Abstract
Avalanches pose a significant threat to the population and infrastructure of mountainous regions. The mapping and documentation of avalanches in Austria is mostly done by experts during field observations and usually covers only specific localized areas. A comprehensive mapping of avalanches is, however, crucial for the work of local avalanche commissions as well as avalanche warning services to assess, e.g., the avalanche danger. Over the past decade, mapping avalanches from satellite imagery has proven to be a promising and rapid approach to monitor avalanche activity in specific regions. Several recent avalanche detection approaches use deep learning-based algorithms to improve detection rates compared to traditional segmentation algorithms. Building on the success of these deep learning-based approaches, we present the first steps to build a modular data pipeline to map historical avalanche cycles in Copernicus Sentinel-1 imagery of the Austrian Alps. The Sentinel-1 mission has provided free all-weather synthetic aperture radar data since 2014, which has proven suitable for avalanche mapping in a Norwegian test area. In addition, we present a roadmap for setting up a segmentation algorithm, in which a general U-Net approach will serve as a baseline and will be compared with the mapping results of additional algorithms initially applied to autonomous driving. We propose to train the U-Net using a labeled training dataset of avalanche outlines from Switzerland, Norway and Greenland. Due to the lack of training and validation data from Austria, we plan to compile the first avalanche archive for Austria. Meteorological variables, e.g., precipitation or wind, are highly important for the release of avalanches. In a completely new approach, we will therefore consider weather station data or outputs of numerical weather models in the learning-based algorithm to improve the detection performance. The mapping results in Austria will be complemented with pointwise field measurements of the MOLISENS platform and the RIEGL VZ-6000 terrestrial laser scanner.
- Published
- 2023
- Full Text
- View/download PDF
5. Contaminations on Lidar Sensor Covers: Performance Degradation Including Fault Detection and Modeling as Potential Applications
- Author
-
Birgit Schlager, Thomas Goelles, Stefan Muckenhuber, and Daniel Watzenig
- Subjects
Autonomous vehicles, fault diagnosis, measurement errors, optical sensors, Transportation engineering, Transportation and communications - Abstract
Lidar sensors play an essential role in the perception system of automated vehicles. Fault Detection, Isolation, Identification, and Recovery (FDIIR) systems are essential for increasing the reliability of lidar sensors. Knowing the influence of different faults on lidar data is the first crucial step towards fault detection for lidar sensors in automated vehicles. We investigate the influences of sensor cover contaminations on the output data, i.e., on the lidar point cloud and full waveform. Different contamination types were applied (dew, dirt, artificial dirt, foam, water, and oil) and the influence on the output data of the single beam lidar RIEGL LD05-A20 and the automotive mechanically spinning lidar Ouster OS1-64 was evaluated. The LD05-A20 measurements show that dew, artificial dirt, and foam lead to unwanted reflections at the sensor cover. Dew, artificial dirt over the entire transmitter, and foam measurements lead to severe faults, i.e., complete sensor blindness. The OS1-64 measurements also show that dew can lead to almost complete sensor blindness. The results look promising for further studies on fault detection and isolation, since the different contamination types lead to different symptom combinations.
- Published
- 2022
- Full Text
- View/download PDF
6. Automotive Lidar and Vibration: Resonance, Inertial Measurement Unit, and Effects on the Point Cloud
- Author
-
Birgit Schlager, Thomas Goelles, Marco Behmer, Stefan Muckenhuber, Johann Payer, and Daniel Watzenig
- Subjects
Automated vehicle, measurement errors, vibration measurement, vision sensors for intelligent vehicles, Transportation engineering, Transportation and communications - Abstract
Lidar is an important component of the perception suite for automated systems. The effects of vibration on lidar point clouds are mostly unknown, despite the lidar’s wide adoption and its usual application under conditions where vibration occurs frequently. In this study, we performed controlled vibration tests from 6 to 2000 Hz at 9 and 12 m/s² in the vertical direction on the automotive lidar OS1-64 by Ouster. An information loss emerged that is mostly independent of frequency and acceleration. The loss of points is randomly distributed and does not correlate with range, intensity, or ring number (the horizontal line of the rotating lidar unit). The resonance frequency of 1426 Hz proved to be unproblematic as no pronounced negative effects on the point cloud could be identified. For vibration detection, the internal Inertial Measurement Unit (IMU) of the OS1-64 is accurate and sufficient for vibrations up to 50 Hz. Above 50 Hz, external IMUs would be required for vibration detection. Counting the number of points on a target close to the edges was investigated as an exemplary way to detect vibration purely based on the point cloud, i.e., independent of the lidar’s IMU.
- Published
- 2022
- Full Text
- View/download PDF
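The vibration entry above mentions counting points on a target close to its edges as a point-cloud-only vibration indicator. Below is a rough sketch of such an edge-point count, assuming the target's bounding box is known; field layout, margins, and thresholds are assumptions, not the published procedure.

```python
import numpy as np

def edge_point_ratio(points_xyz, box_min, box_max, edge_margin=0.05):
    """Fraction of points on a planar target that fall within `edge_margin` metres
    of the target's left/right edges (y axis); comparing this ratio between a
    vibration-free reference scan and a test scan is the illustrative indicator."""
    pts = np.asarray(points_xyz, dtype=float)
    box_min, box_max = np.asarray(box_min, float), np.asarray(box_max, float)
    on_target = pts[np.all((pts >= box_min) & (pts <= box_max), axis=1)]
    if len(on_target) == 0:
        return 0.0
    near_left = on_target[:, 1] < box_min[1] + edge_margin
    near_right = on_target[:, 1] > box_max[1] - edge_margin
    return float(np.mean(near_left | near_right))

# Usage idea: ratio_ref = edge_point_ratio(ref_cloud, box_min, box_max)
#             ratio_vib = edge_point_ratio(test_cloud, box_min, box_max)
# A clear change between the two hints at vibration-induced effects on the cloud.
```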
7. Sea ice drift data for Fram Strait derived from a feature-tracking algorithm applied on Sentinel-1 SAR imagery
- Author
-
Stefan Muckenhuber and Stein Sandven
- Subjects
Computer applications to medicine. Medical informatics, Science (General) - Abstract
A total of 1541 Sentinel-1 SAR images, acquired over Fram Strait between 2014 and 2016, were considered for sea ice drift retrieval using an open-source feature-tracking algorithm. Based on this SAR image data set, 2026 and 3120 image pairs in HH and HV polarisation, respectively, were used to calculate monthly mean sea ice velocities at 79° N.
- Published
- 2018
- Full Text
- View/download PDF
8. GPS data from sea ice trackers deployed in Fram Strait in 2016
- Author
-
Stefan Muckenhuber and Hanne Sagen
- Subjects
Computer applications to medicine. Medical informatics, Science (General) - Abstract
Three GPS trackers were deployed on sea ice in Fram Strait, collecting GPS positions between 7 July and 10 September 2016 at intervals of 5 to 30 min. For easier understanding and usage, corresponding satellite images and Python scripts are provided together with the GPS data.
- Published
- 2018
- Full Text
- View/download PDF
9. Development and Experimental Validation of an Intelligent Camera Model for Automated Driving
- Author
-
Simon Genser, Stefan Muckenhuber, Selim Solmaz, and Jakob Reckenzaun
- Subjects
automotive perception sensors, sensor model, virtual testing, ADAS/AD function, automotive camera, Chemical technology - Abstract
The virtual testing and validation of advanced driver assistance system and automated driving (ADAS/AD) functions require efficient and realistic perception sensor models. In particular, the limitations and measurement errors of real perception sensors need to be simulated realistically in order to generate useful sensor data for the ADAS/AD function under test. In this paper, a novel sensor modeling approach for automotive perception sensors is introduced. The novel approach combines kernel density estimation with regression modeling and puts the main focus on the position measurement errors. The modeling approach is designed for any automotive perception sensor that provides position estimations at the object level. To demonstrate and evaluate the new approach, a common state-of-the-art automotive camera (Mobileye 630) was considered. Both sensor measurements (Mobileye position estimations) and ground-truth data (DGPS positions of all participating vehicles) were collected during a large measurement campaign on a Hungarian highway to support the development and experimental validation of the new approach. The quality of the model was tested and compared to reference measurements, leading to a pointwise position error of 9.60% in the lateral and 1.57% in the longitudinal direction. Additionally, the modeling of the natural scattering of the sensor model output was satisfactory. In particular, the deviations of the position measurements were well modeled with this approach.
- Published
- 2021
- Full Text
- View/download PDF
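The camera-model entry above combines kernel density estimation with regression modeling of position measurement errors. Below is a minimal sketch of that combination on synthetic data, assuming a simple linear regression for the systematic part and a KDE of the residuals for the scattering; none of the numbers reflect the Mobileye 630 results.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical training data: ground-truth longitudinal distance [m] and the
# corresponding sensor position error [m] (synthetic, for illustration only).
gt_distance = rng.uniform(5, 100, 500)
error = 0.02 * gt_distance + rng.normal(0.0, 0.3, 500)

# 1) Regression captures the systematic, distance-dependent part of the error.
reg = LinearRegression().fit(gt_distance.reshape(-1, 1), error)
residuals = error - reg.predict(gt_distance.reshape(-1, 1))

# 2) A kernel density estimate of the residuals models the natural scattering.
kde = gaussian_kde(residuals)

def simulate_measurement(true_distance: float) -> float:
    """Ground truth -> simulated sensor output: systematic bias plus sampled noise."""
    bias = reg.predict([[true_distance]])[0]
    noise = kde.resample(1)[0, 0]
    return true_distance + bias + noise

print(simulate_measurement(40.0))
```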
10. Configurable Sensor Model Architecture for the Development of Automated Driving Systems
- Author
-
Simon Schmidt, Birgit Schlager, Stefan Muckenhuber, and Rainer Stark
- Subjects
sensor model architecture, configurable sensor model, sensor effects, automated driving, virtual testing, functional decomposition, Chemical technology - Abstract
Sensor models provide the required environmental perception information for the development and testing of automated driving systems in virtual vehicle environments. In this article, a configurable sensor model architecture is introduced. Based on methods of model-based systems engineering (MBSE) and functional decomposition, this approach supports a flexible and continuous way to use sensor models in automotive development. Modeled sensor effects, representing single-sensor properties, are combined into an overall sensor behavior. This improves reusability and enables adaptation to specific requirements of the development. Finally, a first practical application of the configurable sensor model architecture is demonstrated, using two exemplary sensor effects: the geometric field of view (FoV) and the object-dependent FoV.
- Published
- 2021
- Full Text
- View/download PDF
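The architecture entry above composes individually modeled sensor effects into an overall sensor behavior, exemplified by a geometric FoV and an object-dependent FoV. Below is a minimal sketch of chaining such effect functions over an object list; the function names, object fields, and ranges are assumptions, not the published architecture.

```python
import math

def geometric_fov(objects, max_range=100.0, half_fov_deg=45.0):
    """Effect 1: drop objects outside the sensor's range and angular field of view."""
    kept = []
    for o in objects:
        r = math.hypot(o["x"], o["y"])
        ang = abs(math.degrees(math.atan2(o["y"], o["x"])))
        if r <= max_range and ang <= half_fov_deg:
            kept.append(o)
    return kept

CLASS_RANGES = {"car": 100.0, "pedestrian": 40.0}  # assumed class-specific ranges [m]

def object_dependent_fov(objects):
    """Effect 2: apply class-specific maximum detection ranges."""
    return [o for o in objects
            if math.hypot(o["x"], o["y"]) <= CLASS_RANGES.get(o["cls"], 0.0)]

def sensor_model(objects, effects):
    """Combine individually modeled sensor effects into an overall sensor behavior."""
    for effect in effects:
        objects = effect(objects)
    return objects

scene = [{"x": 30, "y": 5, "cls": "car"},
         {"x": 60, "y": 0, "cls": "pedestrian"}]
print(sensor_model(scene, [geometric_fov, object_dependent_fov]))
# -> the pedestrian at 60 m exceeds its class-specific range and is removed
```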
11. Fault Detection, Isolation, Identification and Recovery (FDIIR) Methods for Automotive Perception Sensors Including a Detailed Literature Survey for Lidar
- Author
-
Thomas Goelles, Birgit Schlager, and Stefan Muckenhuber
- Subjects
automotive, perception sensor, lidar, fault detection, fault isolation, fault identification, Chemical technology - Abstract
Perception sensors such as camera, radar, and lidar have gained considerable popularity in the automotive industry in recent years. In order to reach the next step towards automated driving it is necessary to implement fault diagnosis systems together with suitable mitigation solutions in automotive perception sensors. This is a crucial prerequisite, since the quality of an automated driving function strongly depends on the reliability of the perception data, especially under adverse conditions. This publication presents a systematic review on faults and suitable detection and recovery methods for automotive perception sensors and suggests a corresponding classification schema. A systematic literature analysis has been performed with focus on lidar in order to review the state-of-the-art and identify promising research opportunities. Faults related to adverse weather conditions have been studied the most, but often without providing suitable recovery methods. Issues related to sensor attachment and mechanical damage of the sensor cover were studied very little and provide opportunities for future research. Algorithms that use the data stream of a single sensor proved to be a viable solution for both fault detection and recovery.
- Published
- 2020
- Full Text
- View/download PDF
12. Performance evaluation of a state-of-the-art automotive radar and corresponding modeling approaches based on a large labeled dataset.
- Author
-
Stefan Muckenhuber, Eniz Museljic, and Georg Stettinger
- Published
- 2022
- Full Text
- View/download PDF
13. Object-based sensor model for virtual testing of ADAS/AD functions.
- Author
-
Stefan Muckenhuber, Hannes Holzer, Jonas Rübsam, and Georg Stettinger
- Published
- 2019
- Full Text
- View/download PDF
14. pointcloudset: Efficient Analysis of Large Datasets of Point Clouds Recorded Over Time.
- Author
-
Thomas Goelles, Birgit Schlager, Stefan Muckenhuber, Sarah Haas, and Tobias Hammer
- Published
- 2021
- Full Text
- View/download PDF
15. The potential of automotive perception sensors for local snow avalanche monitoring
- Author
-
Stefan Muckenhuber, Thomas Goelles, Birgit Schlager, Kathrin Lisa Kapper, Alexander Prokop, and Wolfgang Schöner
- Abstract
Monitoring of local snow avalanche releases is indispensable for many use cases. Existing lidar and radar technologies for monitoring local avalanche activity are costly and require closed-source commercial software. These systems are often inflexible for exploring new use cases and too expensive for large-scale applications, e.g., 100-1000 slopes. Therefore, reliable and inexpensive measurement and monitoring techniques based on cutting-edge lidar and radar technology are highly needed. Today, the automotive industry is a leading technology driver for lidar and radar sensors, because the largest challenge for achieving the next level of vehicle automation is to improve the reliability of its perception system. Automotive lidar sensors record high-resolution point clouds with very high acquisition frequencies of 10-20 Hz and a range of up to 400 m. The high cost of mechanically spinning lidars (5-20 kEUR) is still a limiting factor, but prices have already dropped significantly during the last decade and are expected to drop by another order of magnitude in the upcoming years. Modern automotive radar sensors operate at 24 GHz and 77 GHz, have a range of up to 300 m, and provide raw data formats that allow the development of algorithms for detecting changes in the backscatter caused by avalanches. To exploit the potential of these newly emerging, cost-effective technologies for geoscientific applications, a stand-alone, modular sensor system called MOLISENS (MObile LIdar SENsor System) was developed in a cooperation between Virtual Vehicle Research Center and the University of Graz. MOLISENS allows the modular incorporation of cutting-edge radar and lidar sensors. The open-source Python package ‘pointcloudset’ was developed for handling, analyzing, and visualizing large datasets that consist of multiple point clouds recorded over time. This Python package is designed to enable the development of new point cloud algorithms, and it is planned to extend its functionality to radar cluster data. Based on MOLISENS and pointcloudset, a strategy for their operational use in local avalanche monitoring is being developed.
- Published
- 2023
- Full Text
- View/download PDF
16. pointcloudset - A Python package to analyze large datasets of point clouds recorded over time
- Author
-
Thomas Goelles, Birgit Schlager, and Stefan Muckenhuber
- Abstract
Point clouds can be acquired by different sensor types and methods, such as lidar (light detection and ranging), radar (radio detection and ranging), RGB-D (red, green, blue, depth) cameras, SfM (structure from motion), etc. In many cases multiple point clouds are recorded over time, sometimes also referred to as 4D point clouds. For example, automotive lidars from Ouster or Velodyne record point clouds at around 10-20 Hz, resulting in millions of points per second. In addition, monitoring with terrestrial laser scanners is becoming more common, producing datasets similar to those of automotive lidars, although with larger individual point clouds at a lower frame rate. Analyzing such a large collection of point clouds is a big challenge due to the size and unstructured nature of the data. The Python package "pointcloudset" provides a way to store, analyze, and visualize large datasets consisting of multiple point clouds recorded over time. Pointcloudset features lazy evaluation and parallel processing and is designed to enable the development of new point cloud algorithms and their application to big datasets. The package is based on the Python packages pandas, pyntcloud, dask, and open3D. Its high-level API is easy to use, and the package is open source and available on GitHub.
- Published
- 2023
- Full Text
- View/download PDF
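The pointcloudset entry above emphasizes lazy evaluation and parallel processing of many point clouds recorded over time. The generic dask sketch below illustrates that idea only; it deliberately does not reproduce the pointcloudset API itself, and the frame loader is a synthetic stand-in.

```python
import numpy as np
import dask

def load_frame(i):
    """Stand-in for reading one recorded point cloud (here: synthetic xyz points)."""
    rng = np.random.default_rng(i)
    return rng.normal(size=(100_000, 3))

def mean_range(points):
    """Example per-frame analysis: mean distance of all points from the sensor."""
    return float(np.linalg.norm(points, axis=1).mean())

# Lazy evaluation: nothing is loaded or computed until dask.compute() is called,
# and the 50 frames can then be processed in parallel by the scheduler.
tasks = [dask.delayed(mean_range)(dask.delayed(load_frame)(i)) for i in range(50)]
per_frame_mean_range = dask.compute(*tasks)
print(per_frame_mean_range[:3])
```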
17. Next steps to a modular machine learning-based data pipeline for automated snow avalanche detection in the Austrian Alps
- Author
-
Kathrin Lisa Kapper, Thomas Goelles, Stefan Muckenhuber, Andreas Trügler, Jakob Abermann, Birgit Schlager, Christoph Gaisberger, Jakob Grahn, Eirik Malnes, Alexander Prokop, and Wolfgang Schöner
- Abstract
Snow avalanches pose a significant danger to the population and infrastructure in the Austrian Alps. Although rigorous prevention and mitigation mechanisms are in place in Austria, accidents cannot be prevented, and victims are mourned every year. A comprehensive mapping of avalanches would be desirable to support the work of local avalanche commissions and to improve future avalanche predictions. In recent years, mapping of avalanches from satellite images has proven to be a promising and fast approach to monitor avalanche activity. The Copernicus Sentinel-1 mission provides weather-independent synthetic aperture radar data, free of charge since 2014, that has been shown to be suitable for avalanche mapping in a test region in Norway. Several recent approaches to avalanche detection make use of deep learning-based algorithms to improve the detection rate compared to conventional segmentation algorithms. Building upon the success of these deep learning-based approaches, we are setting up a modular data pipeline to map previous avalanche cycles in Sentinel-1 imagery in the Austrian Alps. As the segmentation algorithm, we use a common U-Net approach as a baseline and compare it to mapping results from an additional algorithm originally applied to an autonomous driving problem. As a first test case, the extensive labelled training dataset of around 25 000 avalanche outlines from Switzerland will be used to train the U-Net; further test cases will include the training dataset of around 3 000 avalanches in Norway and around 800 avalanches in Greenland. To obtain training data for avalanches in Austria, we tested an approach of manually mapping avalanches from Sentinel-2 satellite imagery and aerial photos. In a new approach, we will introduce high-resolution weather data, e.g., weather station data, to the learning-based algorithm to improve the detection performance. The avalanches detected with the algorithm will be quantitatively evaluated against held-out test sets and ground-truth data where available. Detection results in Austria will additionally be validated with in situ measurements from the MOLISENS lidar system and the RIEGL VZ-6000 laser scanner. Moreover, we will assess the possibilities of learning-based approaches in the context of avalanche forecasting.
- Published
- 2023
- Full Text
- View/download PDF
18. Automotive Lidar Modelling Approach Based on Material Properties and Lidar Capabilities.
- Author
-
Stefan Muckenhuber, Hannes Holzer, and Zrinka Bockaj
- Published
- 2020
- Full Text
- View/download PDF
19. Performance evaluation of a state-of-the-art automotive radar and corresponding modeling approaches based on a large labeled dataset
- Author
-
Eniz Museljic, Georg Stettinger, and Stefan Muckenhuber
- Subjects
Computer science, Applied Mathematics, Real-time computing, Aerospace Engineering, Sensor model, Computer Science Applications, Control and Systems Engineering, Automotive radar, Automotive Engineering, Virtual test, Radar, Software, Information Systems - Abstract
Radar is a key sensor to achieve a reliable environment perception for advanced driver assistance system and automated driving (ADAS/AD) functions. Reducing the development efforts for ADAS functio...
- Published
- 2021
- Full Text
- View/download PDF
20. MOLISENS: MObile LIdar SENsor System to exploit the potential of small industrial lidar devices for geoscientific applications
- Author
-
Thomas Goelles, Tobias Hammer, Stefan Muckenhuber, Birgit Schlager, Jakob Abermann, Christian Bauer, Víctor J. Expósito Jiménez, Wolfgang Schöner, Markus Schratter, Benjamin Schrei, and Kim Senger
- Subjects
Atmospheric Science, Geology, Oceanography - Abstract
We propose a newly developed modular MObile LIdar SENsor System (MOLISENS) to enable new applications for small industrial lidar (light detection and ranging) sensors. The stand-alone modular setup supports both monitoring of dynamic processes and mobile mapping applications based on SLAM (Simultaneous Localization and Mapping) algorithms. The main objective of MOLISENS is to exploit newly emerging perception sensor technologies developed for the automotive industry for geoscientific applications. However, MOLISENS can also be used for other application areas, such as 3D mapping of buildings or vehicle-independent data collection for sensor performance assessment and sensor modeling. Compared to TLSs, small industrial lidar sensors provide advantages in terms of size (on the order of 10 cm), weight (on the order of 1 kg or less), price (typically between EUR 5000 and 10 000), robustness (typical protection class of IP68), frame rates (typically 10–20 Hz), and eye safety class (typically 1). For these reasons, small industrial lidar systems can provide a very useful complement to currently used TLS (terrestrial laser scanner) systems that have their strengths in range and accuracy performance. The MOLISENS hardware setup consists of a sensor unit, a data logger, and a battery pack to support stand-alone and mobile applications. The sensor unit includes the small industrial lidar Ouster OS1-64 Gen1, a ublox multi-band active GNSS (Global Navigation Satellite System) with the possibility for RTK (real-time kinematic), and a nine-axis Xsens IMU (inertial measurement unit). Special emphasis was put on the robustness of the individual components of MOLISENS to support operations in rough field and adverse weather conditions. The sensor unit has a standard tripod thread for easy mounting on various platforms. The current setup of MOLISENS has a horizontal field of view of 360°, a vertical field of view with a 45° opening angle, a range of 120 m, a spatial resolution of a few centimeters, and a temporal resolution of 10–20 Hz. To evaluate the performance of MOLISENS, we present a comparison between the integrated small industrial lidar Ouster OS1-64 and the state-of-the-art high-accuracy and high-precision TLS Riegl VZ-6000 in a set of controlled experimental setups. We then apply the small industrial lidar Ouster OS1-64 in several real-world settings. The mobile mapping application of MOLISENS has been tested under various conditions, and results are shown from two surveys in the Lurgrotte cave system in Austria and a glacier cave in Longyearbreen on Svalbard.
- Published
- 2022
21. Reply on RC2
- Author
-
Stefan Muckenhuber
- Published
- 2022
- Full Text
- View/download PDF
22. Reply on EC1
- Author
-
Stefan Muckenhuber
- Published
- 2022
- Full Text
- View/download PDF
23. State-of-the-Art Sensor Models for Virtual Testing of Advanced Driver Assistance Systems/Autonomous Driving Functions
- Author
-
Stefan Muckenhuber, Martin Kirchengast, Relindis Rott, Daniel Watzenig, Hannes Holzer, Kmeid Saad, Franz Michael Maier, Birgit Schlager, Georg Stettinger, Simon Schmidt, and Jonas Ruebsam
- Subjects
Artificial Intelligence, Control and Systems Engineering, Computer science, Driver support systems, Automotive Engineering, Virtual test, Advanced driver assistance systems, Virtual reality, Simulation, Computer Science Applications - Published
- 2020
- Full Text
- View/download PDF
24. The potential of automated snow avalanche detection from SAR images for the Austrian Alpine region using a learning-based approach
- Author
-
Kathrin Lisa Kapper, Stefan Muckenhuber, Thomas Goelles, Andreas Trügler, Muhamed Kuric, Jakob Abermann, Jakob Grahn, Eirik Malnes, and Wolfgang Schöner
- Abstract
Each year, snow avalanches cause many casualties and tremendous damage to infrastructure. Prevention and mitigation mechanisms for avalanches are established for specific regions only. However, the full extent of the overall avalanche activity is usually barely known as avalanches occur in remote areas making in-situ observations scarce. To overcome these challenges, an automated avalanche detection approach using the Copernicus Sentinel-1 synthetic aperture radar (SAR) data has recently been introduced for some test regions in Norway. This automated detection approach from SAR images is faster and gives more comprehensive results than field-based detection provided by avalanche experts. The Sentinel-1 programme has provided - and continues to provide - free of charge, weather-independent, and high-resolution satellite Earth observations since its start in 2014. Recent advances in avalanche detection use deep learning algorithms to improve the detection rates. Consequently, the performance potential and the availability of reliable training data make learning-based approaches an appealing option for avalanche detection. In the framework of the exploratory project SnowAV_AT, we intend to build the basis for a state-of-the-art automated avalanche detection system for the Austrian Alps, including a "best practice" data processing pipeline and a learning-based approach applied to Sentinel-1 SAR images. As a first step towards this goal, we have compiled several labelled training datasets of previously detected avalanches that can be used for learning. Concretely, these datasets contain 19000 avalanches that occurred during a large event in Switzerland in January 2018, around 6000 avalanches that occurred in Switzerland in January 2019, and around 800 avalanches that occurred in Greenland in April 2016. The avalanche detection performance of our learning-based approach will be quantitatively evaluated against held-out test sets. Furthermore, we will provide qualitative evaluations using SAR images of the Austrian Alps to gauge how well our approach generalizes to unseen data that is potentially differently distributed than the training data. In addition, selected ground truth data from Switzerland, Greenland and Austria will allow us to validate the accuracy of the detection approach. As a particular novelty of our work, we will try to leverage high-resolution weather data and combine it with SAR images to improve the detection performance. Moreover, we will assess the possibilities of learning-based approaches in the context of the arguably more challenging avalanche forecasting problem.
- Published
- 2022
- Full Text
- View/download PDF
25. Weather history encoding for machine learning-based snow avalanche detection
- Author
-
Thomas Gölles, Kathrin Lisa Kapper, Stefan Muckenhuber, and Andreas Trügler
- Abstract
Since its start in 2014, the Copernicus Sentinel-1 programme has provided free of charge, weather-independent, and high-resolution satellite Earth observations and has set major scientific advances in the detection of snow avalanches from satellite imagery in motion. Recently, operational avalanche detection from Sentinel-1 synthetic aperture radar (SAR) images was successfully introduced for some test regions in Norway. However, current state-of-the-art avalanche detection algorithms based on machine learning do not include weather history. We propose a novel way to encode weather data and include it in an automatic avalanche detection pipeline for the Austrian Alps. The approach consists of four steps. First, the raw data in netCDF format are downloaded, consisting of several meteorological parameters over several time steps. In the second step, the weather data are downscaled onto the pixel locations of the SAR image. Then the data are aggregated over time, which produces a two-dimensional grid of one value per SAR pixel at the time when the SAR data were recorded. This aggregation function can range from simple averages to full snowpack models. In the final step, the grid is converted to an image with greyscale values corresponding to the aggregated values. The resulting image is then ready to be fed into the machine learning pipeline. We will include this encoded weather history data to increase the avalanche detection performance and investigate contributing factors with model interpretability tools and explainable artificial intelligence.
- Published
- 2022
- Full Text
- View/download PDF
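The weather-encoding entry above lists four steps: read the netCDF weather fields, downscale them to the SAR pixel grid, aggregate over time, and convert the result to a greyscale image. Below is a minimal numpy-only sketch of those steps, using nearest-neighbour resampling and a time mean as the aggregation function; grid sizes and the precipitation field are assumptions.

```python
import numpy as np

# Step 1 (assumed input): a precipitation field on a coarse weather grid over
# several time steps; in practice this would be read from netCDF (e.g., with xarray).
t_steps, ny, nx = 24, 20, 30
weather = np.random.default_rng(1).gamma(2.0, 1.0, size=(t_steps, ny, nx))

# Step 2: downscale onto the (finer) SAR pixel grid by nearest-neighbour lookup.
sar_ny, sar_nx = 200, 300
rows = np.arange(sar_ny) * ny // sar_ny
cols = np.arange(sar_nx) * nx // sar_nx
weather_on_sar = weather[:, rows[:, None], cols[None, :]]   # (time, sar_ny, sar_nx)

# Step 3: aggregate over time (a simple mean here; could be a full snowpack model).
aggregated = weather_on_sar.mean(axis=0)

# Step 4: rescale to 8-bit greyscale so it can be stacked with the SAR channels.
lo, hi = aggregated.min(), aggregated.max()
grey = ((aggregated - lo) / (hi - lo) * 255).astype(np.uint8)
print(grey.shape, grey.dtype)   # (200, 300) uint8
```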
26. Automotive lidar in the Arctic: 3D monitoring and mapping
- Author
-
Birgit Schlager, Thomas Goelles, Stefan Muckenhuber, Tobias Hammer, Kim Senger, Rüdiger Engel, Christian Bobrich, and Daniel Watzenig
- Abstract
We enable exciting and novel mapping and monitoring use cases for automotive lidar technologies in the Arctic. Originally, these lidar technologies were developed for enabling environment perception of automated vehicles with high spatial resolution and accuracy. Therefore, these lidar sensors have several advantages for mobile mapping applications in the Arctic compared to commonly used technologies like time-lapse cameras and satellite or aerial photogrammetry, which suffer from lower accuracy of 3-dimensional (3D) data than the proposed automotive lidar sensors. At present, terrestrial laser scanners (TLS), like the Riegl VZ-6000, are commonly used in the Arctic. However, especially for mobile use cases, automotive lidar provides many advantages compared to TLS, for instance lower cost, higher robustness, and smaller size and weight, and is thus more portable. Therefore, automotive lidar sensors open the door for new mobile mapping and monitoring applications in the Arctic. The data acquisition hardware consists of a sensor unit, a data logger, and batteries. The sensor unit integrates an automotive lidar, the Ouster OS1-64 Gen1, a ublox multi-band active global navigation satellite system (GNSS) antenna, and a Xsens 9-axis inertial measurement unit (IMU) with a gyroscope, an accelerometer, and a magnetometer. Furthermore, a long-term evolution (LTE) stick is integrated for retrieving real-time kinematic (RTK) data. In a post-processing step, collected point clouds and IMU data can be used by a simultaneous localization and mapping (SLAM) algorithm for point cloud stitching, resulting in one large point cloud, i.e., a map of the scanned environment, together with the trajectory of the mapping sensor. Optionally, the differential global positioning system (DGPS) data can additionally be used by the SLAM algorithm. The setup can be mounted in multiple ways to support a wide variety of new applications, e.g., on a handle, car, ship, or snowmobile. We used the introduced setup for several applications and successfully mapped glacier caves and surrounding glacier surfaces on Longyearbreen and Larsbreen in Svalbard as one example of a novel Arctic use case. Furthermore, we showed that the setup works on a ship by scanning a harbor in Croatia. In this measurement campaign, we used a multi-beam sonar from Furuno in addition to our mapping setup, which made it possible to map the coast above and below the water surface. Therefore, we suggest several new applications of automotive lidar sensors in the Arctic, e.g., monitoring coastal erosion due to permafrost thawing and mapping glacier fronts. In this way, accurate outlines and structures of coasts and calving glacier fronts can be generated. Such data will be relevant for the future development of glacier calving models. Furthermore, the setup can be used for monitoring glacier fronts over a period of several years. Further research may also include merging the gained 3D map with photogrammetry data to generate highly accurate 3D models of a glacier front with textural details. Another novel Arctic use case could be time-lapse scans of infrastructure, e.g., runways, roads, or cultural heritage, affected by thawing permafrost to track changes and movements cost-effectively.
- Published
- 2022
- Full Text
- View/download PDF
27. MOLISENS: a modular MObile LIdar SENsor System to exploit the potential of automotive lidar for geoscientific applications
- Author
-
Thomas Goelles, Tobias Hammer, Stefan Muckenhuber, Birgit Schlager, Jakob Abermann, Christian Bauer, Víctor J. Expósito Jiménez, Wolfgang Schöner, Markus Schratter, Benjamin Schrei, and Kim Senger
- Abstract
We propose a newly developed modular MObile LIdar SENsor System (MOLISENS) to enable new applications for automotive light detection and ranging (lidar) sensors independent of a complete vehicle setup. The stand-alone, modular setup supports both monitoring of dynamic processes and mobile mapping applications based on Simultaneous Localization and Mapping (SLAM) algorithms. The main objective of MOLISENS is to exploit newly emerging perception sensor technologies developed for the automotive industry for geoscientific applications. However, MOLISENS can also be used for other application areas, such as 3D mapping of buildings or vehicle-independent data collection for sensor performance assessment and sensor modeling. Compared to Terrestrial Laser Scanners (TLSs), automotive lidar sensors provide advantages in terms of size (in the order of 10 cm), weight (in the order of 1 kg or less), price (typically between 5,000 EUR and 10,000 EUR), robustness (typical protection class of IP68), frame rates (typically 10 Hz–20 Hz), and eye safety class (typically 1). For these reasons, automotive lidar systems can provide a very useful complement to currently used TLS systems that have their strengths in range and accuracy performance. The MOLISENS hardware setup consists of a sensor unit, a data logger, and a battery pack to support stand-alone and mobile applications. The sensor unit includes the automotive lidar Ouster OS1-64 Gen1, a ublox multi-band active Global Navigation Satellite System (GNSS) with the possibility for Real-Time Kinematic (RTK), and a 9-axis Xsens Inertial Measurement Unit (IMU). Special emphasis was put on the robustness of the individual components of MOLISENS to support operations in rough field and adverse weather conditions. The sensor unit has a standard screw for easy mounting on various platforms. The current setup of MOLISENS has a horizontal field of view of 360°, a vertical field of view with a 45° opening angle, a range of 120 m, a spatial resolution of a few cm, and a temporal resolution of 10 Hz–20 Hz. To evaluate the performance of MOLISENS, we present a comparison between the integrated automotive lidar Ouster OS1-64 and the state-of-the-art TLS RIEGL VZ-6000. The mobile mapping application of MOLISENS has been tested under various conditions, and results are shown from two surveys in the Lurgrotte cave system in Austria and a glacier cave in Longyearbreen on Svalbard.
- Published
- 2022
28. Configurable Sensor Model Architecture for the Development of Automated Driving Systems
- Author
-
Stefan Muckenhuber, Simon Schmidt, Birgit Schlager, and Rainer Stark
- Subjects
Automobile driving, Computer science, Automotive industry, Field of view, sensor model architecture, configurable sensor model, sensor effects, automated driving, virtual testing, functional decomposition, model reusability, Sensor model, Electrical and Electronic Engineering, Chemical technology, Instrumentation - Abstract
Sensor models provide the required environmental perception information for the development and testing of automated driving systems in virtual vehicle environments. In this article, a configurable sensor model architecture is introduced. Based on methods of model-based systems engineering (MBSE) and functional decomposition, this approach supports a flexible and continuous way to use sensor models in automotive development. Modeled sensor effects, representing single-sensor properties, are combined into an overall sensor behavior. This improves reusability and enables adaptation to specific requirements of the development. Finally, a first practical application of the configurable sensor model architecture is demonstrated, using two exemplary sensor effects: the geometric field of view (FoV) and the object-dependent FoV.
- Published
- 2021
- Full Text
- View/download PDF
29. Sensors for Automated Driving
- Author
-
Stefan Muckenhuber, Kenan Softic, Anton Fuchs, Georg Stettinger, and Daniel Watzenig
- Subjects
Computer science, Real-time computing, Automotive industry, Sensor fusion, Lidar, Radar, GNSS applications, Robustness - Abstract
A sensor system capable of supporting automated driving functions needs to provide both reliable localization of the vehicle and robust environment perception of the vehicle's surroundings. The following chapter introduces the working principles and the state of the art of automotive sensors for localization (GNSS and INS) and environment perception (camera, radar and LIDAR), corresponding sensor models and sensor fusion techniques. Sensor models will allow for the replacement of conventional test drives and physical component tests by using simulations in virtual test environments to meet the increasing requirements of automated vehicles with respect to development costs, time and safety. Considering the multitude and complexity of possible environmental conditions, realistic simulation of perception sensors is a particularly demanding topic. To increase the performance of a sensor system, compensate for limitations of each sensor modality, and increase the overall robustness of the system, sensor fusion techniques are an important subject in automotive research.
- Published
- 2020
- Full Text
- View/download PDF
30. Fault Detection, Isolation, Identification and Recovery (FDIIR) Methods for Automotive Perception Sensors Including a Detailed Literature Survey for Lidar
- Author
-
Stefan Muckenhuber, Birgit Schlager, and Thomas Goelles
- Subjects
Computer science, Automotive industry, Review, perception sensor, fault detection, fault isolation, fault identification, fault diagnosis, fault recovery, fault detection and isolation (FDIR), lidar, Radar, Reliability engineering, Literature survey, Chemical technology, Electrical and Electronic Engineering - Abstract
Perception sensors such as camera, radar, and lidar have gained considerable popularity in the automotive industry in recent years. In order to reach the next step towards automated driving it is necessary to implement fault diagnosis systems together with suitable mitigation solutions in automotive perception sensors. This is a crucial prerequisite, since the quality of an automated driving function strongly depends on the reliability of the perception data, especially under adverse conditions. This publication presents a systematic review on faults and suitable detection and recovery methods for automotive perception sensors and suggests a corresponding classification schema. A systematic literature analysis has been performed with focus on lidar in order to review the state-of-the-art and identify promising research opportunities. Faults related to adverse weather conditions have been studied the most, but often without providing suitable recovery methods. Issues related to sensor attachment and mechanical damage of the sensor cover were studied very little and provide opportunities for future research. Algorithms that use the data stream of a single sensor proved to be a viable solution for both fault detection and recovery.
- Published
- 2020
31. Sea ice drift data for Fram Strait derived from a feature-tracking algorithm applied on Sentinel-1 SAR imagery
- Author
-
Stein Sandven and Stefan Muckenhuber
- Subjects
Geography, Multidisciplinary, Earth and Planetary Sciences, Sea ice, Feature tracking, Data set, Algorithm, Geology, Computer applications to medicine. Medical informatics, Science (General) - Abstract
A total of 1541 Sentinel-1 SAR images, acquired over Fram Strait between 2014 and 2016, were considered for sea ice drift retrieval using an open-source feature-tracking algorithm. Based on this SAR image data set, 2026 and 3120 image pairs in HH and HV polarisation, respectively, were used to calculate monthly mean sea ice velocities at 79° N.
- Published
- 2018
32. Open-source sea ice drift algorithm for Sentinel-1 SAR imagery using a combination of feature tracking and pattern matching
- Author
-
Stein Sandven and Stefan Muckenhuber
- Subjects
Geography, Sea ice, Buoy, Feature tracking, Pattern matching, Algorithm, Computer science, Geology, Environmental sciences, Earth-Surface Processes, Water Science and Technology - Abstract
An open-source sea ice drift algorithm for Sentinel-1 SAR imagery is introduced based on the combination of feature tracking and pattern matching. Feature tracking produces an initial drift estimate and limits the search area for the consecutive pattern matching, which provides small- to medium-scale drift adjustments and normalised cross-correlation values. The algorithm is designed to combine the two approaches in order to benefit from the respective advantages. The considered feature-tracking method allows for an efficient computation of the drift field and the resulting vectors show a high degree of independence in terms of position, length, direction and rotation. The considered pattern-matching method, on the other hand, allows better control over vector positioning and resolution. The preprocessing of the Sentinel-1 data has been adjusted to retrieve a feature distribution that depends less on SAR backscatter peak values. Applying the algorithm with the recommended parameter setting, sea ice drift retrieval with a vector spacing of 4 km on Sentinel-1 images covering 400 km × 400 km, takes about 4 min on a standard 2.7 GHz processor with 8 GB memory. The corresponding recommended patch size for the pattern-matching step that defines the final resolution of each drift vector is 34 × 34 pixels (2.7 × 2.7 km). To assess the potential performance after finding suitable search restrictions, calculated drift results from 246 Sentinel-1 image pairs have been compared to buoy GPS data, collected in 2015 between 15 January and 22 April and covering an area from 80.5 to 83.5° N and 12 to 27° E. We found a logarithmic normal distribution of the displacement difference with a median at 352.9 m using HV polarisation and 535.7 m using HH polarisation. All software requirements necessary for applying the presented sea ice drift algorithm are open-source to ensure free implementation and easy distribution.
- Published
- 2017
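The drift-algorithm entry above combines feature tracking (initial drift estimate) with pattern matching (normalised cross-correlation refinement and a quality value). Below is a minimal OpenCV sketch of the two building blocks; patch sizes, search margins, and the median aggregation are illustrative choices, not the published parameter setting.

```python
import cv2
import numpy as np

def initial_drift_orb(img1, img2, nfeatures=1000):
    """Feature-tracking step: median pixel shift of matched ORB keypoints."""
    orb = cv2.ORB_create(nfeatures=nfeatures)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    shifts = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                       for m in matches])
    return np.median(shifts, axis=0)                      # (dx, dy) in pixels

def refine_drift_ncc(img1, img2, x0, y0, init_dx, init_dy, half=17, search=10):
    """Pattern-matching step: normalised cross-correlation in a small window
    around the initial estimate; returns refined (dx, dy) and the peak
    correlation as a quality value."""
    patch = img1[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    cx, cy = int(round(x0 + init_dx)), int(round(y0 + init_dy))
    window = img2[cy - half - search:cy + half + search + 1,
                  cx - half - search:cx + half + search + 1]
    cc = cv2.matchTemplate(window, patch, cv2.TM_CCOEFF_NORMED)
    _, max_cc, _, max_loc = cv2.minMaxLoc(cc)
    return init_dx + max_loc[0] - search, init_dy + max_loc[1] - search, max_cc

# Usage idea (img1, img2: co-located 8-bit SAR subscenes of the same area):
#   dx0, dy0 = initial_drift_orb(img1, img2)
#   dx, dy, quality = refine_drift_ncc(img1, img2, 500, 500, dx0, dy0)
```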
33. Object-based sensor model for virtual testing of ADAS/AD functions
- Author
-
Georg Stettinger, Jonas Rubsam, Stefan Muckenhuber, and Hannes Holzer
- Subjects
Computer science, Advanced driver assistance systems, Probabilistic logic, Object type, Data set, Computer vision, Artificial intelligence - Abstract
A novel generic sensor model is introduced for testing and validation of ADAS/AD (Advanced Driver Assistance Systems/Autonomous Drive) functions. The model converts an incoming object list into a sensor-specific object list and is suitable for all ADAS/AD perception sensors operating on the object level. The field of view of the sensor is represented by a two-dimensional polygon that can be defined by a set of points. A simple ray-tracing method is applied to simulate coverage by objects. The model allows multiple range specifications for a single sensor depending on object type and classification capabilities. A look-up table is used to convert the object class definitions of the virtual environment into the class definitions of the considered sensor. Additional object parameters that are detected by the sensor may be included. False negative and false positive detections are generated by probabilistic functions. The parametrisation procedure of the sensor model is explained and depicted in an example using the data sheet of the Continental ARS404 radar. The model's capability to simulate a complete sensor set is demonstrated with the sensor set of the Renault Zoe that was used to collect the nuScenes data set. The sensor model is integrated into a virtual test-bed using Vires VTD and Open Simulation Interface.
- Published
- 2019
- Full Text
- View/download PDF
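The object-based sensor model entry above describes a polygon field of view defined by a set of points, a class look-up table, and probabilistic false negatives. Below is a minimal sketch of converting a ground-truth object list into a sensor-specific object list along those lines; the polygon, probability, and class mapping are assumptions (the input class strings merely mimic nuScenes-style labels).

```python
import random

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the 2D FOV polygon [(x, y), ...]?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

CLASS_LOOKUP = {"vehicle.car": "car", "human.pedestrian.adult": "pedestrian"}

def sensor_object_list(ground_truth, fov_polygon, p_false_negative=0.05):
    """Convert a ground-truth object list into a sensor-specific object list:
    polygon FOV check, class look-up, and random false negatives (illustrative)."""
    detected = []
    for obj in ground_truth:
        if not point_in_polygon(obj["x"], obj["y"], fov_polygon):
            continue
        if random.random() < p_false_negative:
            continue  # simulated missed detection
        detected.append({"x": obj["x"], "y": obj["y"],
                         "cls": CLASS_LOOKUP.get(obj["cls"], "unknown")})
    return detected

fov = [(0, 0), (80, -40), (80, 40)]   # triangular 2D field-of-view polygon [m]
gt = [{"x": 30, "y": 5, "cls": "vehicle.car"},
      {"x": 30, "y": 60, "cls": "human.pedestrian.adult"}]
print(sensor_object_list(gt, fov))    # the pedestrian lies outside the FOV polygon
```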
34. Operational algorithm for ice–water classification on dual-polarized RADARSAT-2 images
- Author
-
Mohamed Babiker, Stefan Muckenhuber, Stein Sandven, Anton Korosov, and Natalia Zakhvatkina
- Subjects
lcsh:GE1-350 ,Synthetic aperture radar ,geography ,geography.geographical_feature_category ,010504 meteorology & atmospheric sciences ,Contextual image classification ,lcsh:QE1-996.5 ,0211 other engineering and technologies ,02 engineering and technology ,01 natural sciences ,Ice water ,Dual polarized ,lcsh:Geology ,Support vector machine ,Open water ,Sea ice ,Matematikk og Naturvitenskap: 400::Geofag: 450::Petroleumsgeologi og -geofysikk: 464 [VDP] ,Algorithm ,lcsh:Environmental sciences ,Geology ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,Earth-Surface Processes ,Water Science and Technology ,Remote sensing - Abstract
Synthetic Aperture Radar (SAR) data from RADARSAT-2 (RS2) in dual-polarization mode provide additional information for discriminating sea ice and open water compared to single-polarization data. We have developed an automatic algorithm based on dual-polarized RS2 SAR images to distinguish open water (rough and calm) and sea ice. Several technical issues inherent in RS2 data were solved in the pre-processing stage, including thermal noise reduction in HV polarization and correction of angular backscatter dependency in HH polarization. Texture features were explored and used in addition to supervised image classification based on the support vector machines (SVM) approach. The study was conducted in the ice-covered area between Greenland and Franz Josef Land. The algorithm has been trained using 24 RS2 scenes acquired in winter months in 2011 and 2012, and the results were validated against manually derived ice charts of the Norwegian Meteorological Institute. The algorithm was applied on a total of 2705 RS2 scenes obtained from 2013 to 2015, and the validation results showed that the average classification accuracy was 91 ± 4 %.
- Published
- 2018
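The classification entry above uses texture features and a support vector machine to separate sea ice from open water in dual-polarized SAR data. The sketch below substitutes simple local mean/variance statistics for the GLCM texture features used in the paper and trains scikit-learn's SVC on synthetic dual-pol data; it only illustrates the pipeline shape, not the published algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC

def texture_features(hh, hv, win=11):
    """Per-pixel local mean and variance of the HH and HV backscatter as a crude
    stand-in for the GLCM texture features used in the published algorithm."""
    feats = []
    for band in (hh, hv):
        mean = uniform_filter(band, win)
        var = uniform_filter(band ** 2, win) - mean ** 2
        feats += [mean, var]
    return np.stack([f.ravel() for f in feats], axis=1)   # (n_pixels, 4)

# Synthetic dual-pol "scene" and labels (0 = open water, 1 = sea ice), purely to
# exercise the pipeline; real training would use expert-labelled RS2 subscenes.
rng = np.random.default_rng(0)
hh = rng.normal(-20, 2, (64, 64))
hv = rng.normal(-27, 2, (64, 64))
labels = (rng.random((64, 64)) > 0.5).astype(int).ravel()

clf = SVC(kernel="rbf").fit(texture_features(hh, hv), labels)
prediction = clf.predict(texture_features(hh, hv)).reshape(64, 64)
print(prediction.shape)   # per-pixel ice/water map of the scene
```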
35. Response to Referee 3
- Author
-
Stefan Muckenhuber
- Published
- 2017
- Full Text
- View/download PDF
36. Response to Referee 1
- Author
-
Stefan Muckenhuber
- Published
- 2017
- Full Text
- View/download PDF
37. Response to Referee 2
- Author
-
Stefan Muckenhuber
- Published
- 2017
- Full Text
- View/download PDF
38. High resolution sea ice monitoring using space borne Synthetic Aperture Radar
- Author
-
Stefan Muckenhuber
- Subjects
Mathematics and Natural Sciences - Abstract
Sea ice represents a major factor in the climate system and updated knowledge about sea ice conditions is important for shipping and offshore industry, local communities and others. Due to its remote location and strong variability in extent and motion, satellite observations are among the most important data sources for sea ice monitoring. Considering the polar night and 60-90 % cloud coverage over the Arctic, the most reliable sensors for year-round, high resolution sea ice monitoring are Synthetic Aperture Radar (SAR) sensors that operate independently of solar illumination and cloud conditions. In the framework of this thesis, the author developed and applied methods for deriving high resolution sea ice information from spaceborne SAR imagery. A satellite database displaying the area of Svalbard has been established for the time period 2000-2014 and more than 3300 manual interpretations were conducted to distinguish fast ice, drift ice and open water in Isfjorden and Hornsund. The resulting time series revealed a significant reduction of fast ice coverage when comparing the time periods 2000-2005 and 2006-2014. The relationship between sea ice, atmosphere and ocean in the two considered fjords has been discussed by comparing fast ice coverage to sea surface temperature from satellite measurements, surface temperature from weather stations and ocean heat content from CTD data. To derive automatic sea ice/water classification of dual-polarisation Radarsat-2 SAR imagery, an algorithm based on texture features and a support vector machine has been developed and applied operationally in the period 2013 until 2015. Validating the algorithm against 2700 manually derived ice charts from the Norwegian Meteorological Institute revealed an accuracy of 91 ± 4 %. The algorithm showed better performance in winter than in summer. To retrieve sea ice motion information from consecutive SAR images, a feature-tracking algorithm has been developed for Sentinel-1 data based on ORB (Oriented FAST and Rotated BRIEF). The algorithm locates corners, describes the surrounding area, and connects similar corners from one image to the next. The main advantages of the developed feature-tracking algorithm are the computational efficiency and the independence of the vectors in terms of position, length, direction and rotation. However, the vector distribution is not controlled by the user. To overcome this issue, a combined algorithm including a pattern-matching approach has been developed as a successor of the introduced feature-tracking algorithm. Based on a filtered feature-tracking vector field, drift and rotation on the entire SAR scene are estimated. This initial drift field limits the search area for a consecutive pattern-matching algorithm that provides small- to medium-scale adjustments of drift direction, length and rotation. Assessing the potential performance of the combined algorithm with buoy GPS data using 240 Sentinel-1 image pairs yielded a logarithmic normal distribution of the displacement difference with a median at 352.9 m using HV polarisation and 535.7 m using HH polarisation.
- Published
- 2017
39. Open-source sea ice drift algorithm for Sentinel-1 SAR imagery using a combination of feature-tracking and pattern-matching
- Author
-
Stefan Muckenhuber and Stein Sandven
- Abstract
An open-source sea ice drift algorithm for Sentinel-1 SAR imagery is introduced based on the combination of feature-tracking and pattern-matching. A computationally efficient feature-tracking algorithm produces an initial drift estimate and limits the search area for the pattern-matching, which provides small- to medium-scale drift adjustments and normalised cross-correlation values as a quality measure. The algorithm is designed to utilise the respective advantages of the two approaches and allows drift calculation at user-defined locations. The pre-processing of the Sentinel-1 data has been optimised to retrieve a feature distribution that depends less on SAR backscatter peak values. A recommended parameter set for the algorithm has been found using a representative image pair over Fram Strait and 350 manually derived drift vectors as validation. Applying the algorithm with this parameter setting, sea ice drift retrieval with a vector spacing of 8 km on Sentinel-1 images covering 400 km x 400 km takes less than 3.5 minutes on a standard 2.7 GHz processor with 8 GB memory. For validation, buoy GPS data, collected between 15 January and 22 April 2015 and covering an area from 81° N to 83.5° N and 12° E to 27° E, have been compared to calculated drift results from 261 corresponding Sentinel-1 image pairs. We found a logarithmic distribution of the error with a peak at 300 m. All software requirements necessary for applying the presented sea ice drift algorithm are open-source to ensure free implementation and easy distribution.
- Published
- 2016
- Full Text
- View/download PDF
40. Operational algorithm for ice/water classification on dual-polarized RADARSAT-2 images
- Author
-
Natalia Zakhvatkina, Anton Korosov, Stefan Muckenhuber, Stein Sandven, and Mohamed Babiker
- Abstract
Synthetic aperture radar (SAR) data from RADARSAT-2 (RS2) taken in dual-polarization mode provide additional information for discriminating sea ice and open water compared to single-polarization data. We have developed a fully automatic algorithm to distinguish between open water (rough/calm) and sea ice based on dual-polarized RS2 SAR images. Several technical problems inherent in RS2 data were solved at the pre-processing stage, including thermal noise reduction in the HV-polarization channel and correction of the angular backscatter dependency in HH-polarization. Texture features are used as additional information for supervised image classification based on the Support Vector Machines (SVM) approach. The main regions of interest are the ice-covered seas between Greenland and Franz Josef Land. The algorithm has been trained using 24 RS2 scenes acquired during winter months in 2011 and 2012, and validated against the manually derived ice chart product from the Norwegian Meteorological Institute. Between 2013 and 2015, 2705 RS2 scenes have been utilised for validation and the average classification accuracy has been found to be 91 ± 4 %.
- Published
- 2016
- Full Text
- View/download PDF
41. Sea ice cover in Isfjorden and Hornsund 2000–2014 by using remote sensing
- Author
-
Anton Korosov, Frank Nilsen, Stein Sandven, and Stefan Muckenhuber
- Subjects
Geography, Remote sensing, Sea ice, Geology - Abstract
A satellite database including 16 555 satellite images and ice charts displaying the area of Isfjorden, Hornsund and the Svalbard region has been established with focus on the time period 2000–2014. 3319 manual interpretations of sea ice conditions have been conducted, resulting in two time series dividing the area of Isfjorden and Hornsund into "Fast ice", "Drift ice" and open "Water". The maximum fast ice coverage of Isfjorden is > 40 % in the periods 2000–2005 and 2009–2011 and stays < 30 % in 2006–2008 and 2012–2014. Fast ice cover in Hornsund reaches > 40 % in all considered years, except for 2012 and 2014, where the maximum stays < 20 %. The mean seasonal cycles of fast ice in Isfjorden and Hornsund show monthly averaged values of less than 1 % between July and November and maxima in March (Isfjorden, 35.7 %) and April (Hornsund, 42.1 %) respectively. A significant reduction of the monthly averaged fast ice coverage is found when comparing the time periods 2000–2005 and 2006–2014. The seasonal maximum decreases from 57.5 to 23.2 % in Isfjorden and from 52.6 to 35.2 % in Hornsund. A new concept, called "days of fast ice coverage" (DFI), is introduced for quantification of the interannual variation of fast ice cover, allowing for comparison between different fjords and winter seasons. Considering the time period from 1 March until end of sea ice season, the mean DFI values for 2000–2014 are 33.1 ± 18.2 DFI (Isfjorden) and 42.9 ± 18.2 DFI (Hornsund). A distinct shift to lower DFI values is observed in 2006. Calculating a mean before and after 2006 yields a decrease from 50 to 22 DFI for Isfjorden and from 56 to 34 DFI for Hornsund.
- Published
- 2015
- Full Text
- View/download PDF