8,149 results for "Propagation of uncertainty"
Search Results
2. Uncertainty propagation in absolute metabolite quantification for in vivo MRS of the human brain.
- Author
-
Instrella, Ronald and Juchem, Christoph
- Subjects
MONTE Carlo method, MEASUREMENT errors, CORRECTION factors, MACROMOLECULES
- Abstract
Purpose: Absolute spectral quantification is the standard method for deriving estimates of the concentration from metabolite signals measured using in vivo proton MRS (1H‐MRS). This method is often reported with minimum variance estimators, specifically the Cramér‐Rao lower bound (CRLB) of the metabolite signal amplitude's scaling factor from linear combination modeling. This value serves as a proxy for SD and is commonly reported in MRS experiments. Characterizing the uncertainty of absolute quantification, however, depends on more than simply the CRLB. The uncertainties of metabolite‐specific (T1m, T2m), reference‐specific (T1ref, T2ref), and sequence‐specific (TR, TE) parameters are generally ignored, potentially leading to an overestimation of precision. In this study, the propagation of uncertainty is used to derive a comprehensive estimate of the overall precision of concentrations from an internal reference. Methods: The propagated uncertainty is calculated using analytical derivations and Monte Carlo simulations and subsequently analyzed across a set of commonly measured metabolites and macromolecules. The effect of measurement error from experimentally obtained quantification parameters is estimated using published uncertainties and CRLBs from in vivo 1H‐MRS literature. Results: The additive effect of propagated measurement uncertainty from applied quantification correction factors can result in up to a fourfold increase in the concentration estimate's coefficient of variation compared to the CRLB alone. A case study analysis reveals similar multifold increases across both metabolites and macromolecules. Conclusion: The precision of absolute metabolite concentrations derived from 1H‐MRS experiments is systematically overestimated if the uncertainties of commonly applied corrections are neglected as sources of error. [ABSTRACT FROM AUTHOR]
- Published
- 2024
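To make the propagation mechanics concrete, here is a minimal Monte Carlo sketch of the kind of calculation this abstract describes, using the standard internal-reference quantification form with a relaxation correction factor; all relaxation times, standard deviations, and sequence settings below are invented for illustration and are not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
TR, TE = 2.0, 0.02                       # s; sequence timing, treated as exact

def relax(T1, T2):
    # saturation and T2-decay correction for a spin-echo style acquisition
    return (1 - np.exp(-TR / T1)) * np.exp(-TE / T2)

# hypothetical means and SDs (s) for metabolite and reference relaxation times
T1m = rng.normal(1.35, 0.10, N); T2m = rng.normal(0.24, 0.03, N)
T1r = rng.normal(1.10, 0.05, N); T2r = rng.normal(0.08, 0.01, N)
S   = rng.normal(1.00, 0.05, N)          # amplitude ratio; 5% plays the CRLB role

conc = S * relax(T1r, T2r) / relax(T1m, T2m)    # concentration, arbitrary units
print(f"CV from amplitude alone: 5.0% | CV with propagated corrections: "
      f"{100 * conc.std() / conc.mean():.1f}%")
```

With made-up inputs like these, the concentration CV comes out well above the 5% amplitude term alone, which is the overestimation-of-precision effect the study quantifies.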
3. Exploring uncertainty in hyper-viscoelastic properties of scalp skin through patient-specific finite element models for reconstructive surgery.
- Author
-
Song, Gyohyeon, Gosain, Arun K., Buganza Tepole, Adrian, Rhee, Kyehan, and Lee, Taeksang
- Abstract
Understanding skin responses to external forces is crucial for post-cutaneous flap wound healing. However, the in vivo viscoelastic behavior of scalp skin remains poorly understood. Personalized virtual surgery simulations offer a way to study tissue responses in relevant 3D geometries. Yet, anticipating wound risk remains challenging due to limited data on skin viscoelasticity, which hinders our ability to determine the interplay between wound size and stress levels. To bridge this gap, we reexamine three clinical cases involving scalp reconstruction using patient-specific geometric models and employ uncertainty quantification through a Monte Carlo simulation approach to study the effect of skin viscoelasticity on the final stress levels from reconstructive surgery. Utilizing the generalized Maxwell model via the Prony series, we can parameterize and efficiently sample a realistic range of viscoelastic response and thus shed light on the influence of viscoelastic material uncertainty in surgical scenarios. Our analysis identifies regions at risk of wound complications based on reported threshold stress values from the literature and highlights the significance of focusing on long-term responses rather than short-term ones. [ABSTRACT FROM AUTHOR]
- Published
- 2024
4. The Structure of Complex Cognitive Models
- Author
-
Leve, Robert
- Published
- 2022
5. Skew-normal distribution model for rainfall uncertainty estimation in a distributed hydrological model.
- Author
-
Salgado-Castillo, Félix, Barrios, Miguel, and Velez Upegui, Jorge
- Subjects
HYDROLOGIC models, DISTRIBUTION (Probability theory), GAUSSIAN distribution, METEOROLOGICAL stations, RAIN gauges, SKEWNESS (Probability theory), PREDICATE calculus
- Abstract
Despite the progress made by numerous contributions in recent decades on uncertainty in hydrological simulation, there are still knowledge gaps in estimating uncertainty sources, especially those associated with precipitation. The aim of this study was to determine the precipitation uncertainty through an error model based on the skew-normal distribution function and to evaluate the effect of its propagation towards the simulated flow with the TETIS distributed hydrological model in a poorly instrumented tropical Andean basin. The results show that the performance of the hydrological model is more sensitive to the location of the meteorological station used than to the number of stations employed in a real case with scarce information. Implementing a Bayesian approach to quantify uncertainty in input data such as precipitation improves the understanding of how this source of error propagates to the results of the hydrological simulation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
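As an illustration of the error-model idea (with invented parameters, not the paper's fitted ones), the sketch below perturbs gauge rainfall with skew-normal multiplicative errors and pushes the ensemble through a toy runoff function standing in for the TETIS model; `toy_runoff` and all numbers are hypothetical:

```python
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(1)
rain_obs = np.array([0.0, 5.2, 12.1, 3.4, 0.8])   # hypothetical daily rainfall (mm)

# skew-normal multiplicative error model; shape/loc/scale are illustrative guesses
a, loc, scale = 4.0, 0.8, 0.35
mult = skewnorm.rvs(a, loc=loc, scale=scale,
                    size=(1000, rain_obs.size), random_state=rng)
ensemble = rain_obs * mult                         # 1000 plausible rainfall series

def toy_runoff(p, k=0.6):
    # crude stand-in for a hydrological model: runoff from rain above a threshold
    return k * np.maximum(p - 1.0, 0.0).sum(axis=-1)

q = toy_runoff(ensemble)
print("flow: mean %.2f, 90%% band [%.2f, %.2f]"
      % (q.mean(), *np.percentile(q, [5, 95])))
```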
6. An accurate and consistent procedure for the evaluation of measurement uncertainty
- Author
-
Robin Willink
- Subjects
Propagation of error, Propagation of uncertainty, Frequentist statistics, Bayesian statistics, Interpretation of probability, Electric apparatus and materials. Electric circuits. Electric networks, TK452-454.4
- Abstract
The statistical methodology described in the Guide to the Expression of Uncertainty in Measurement (GUM) is known to be flawed. Type A evaluation in the GUM appeals to the classical idea that probability represents the frequency behaviour of errors in a measurement procedure, while Type B evaluation seems to make unknown constants the subjects of probability distributions, which is not permitted in the classical approach. This paper shows how Type B evaluation can be understood in terms of error distributions, so that the GUM methodology can be made coherent and hence extensible. A modification with better properties and performance is then described.
- Published
- 2021
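For context, the GUM's law of propagation of uncertainty combines Type A and Type B standard uncertainties through sensitivity coefficients, u_c(y)^2 = sum_i (df/dx_i)^2 u(x_i)^2. A minimal numerical sketch, with the measurand P = V^2/R and all values invented:

```python
import numpy as np

def combined_u(f, x, u, h=1e-6):
    """Law of propagation of uncertainty with central-difference sensitivities."""
    x = np.asarray(x, float); u = np.asarray(u, float)
    c = np.empty_like(x)
    for i in range(x.size):
        d = np.zeros_like(x); d[i] = h * max(1.0, abs(x[i]))
        c[i] = (f(x + d) - f(x - d)) / (2 * d[i])   # sensitivity coefficient
    return np.sqrt(np.sum((c * u) ** 2))

P = lambda x: x[0] ** 2 / x[1]                  # V = x[0] volts, R = x[1] ohms
print(combined_u(P, x=[230.0, 52.9], u=[1.2, 0.4]))   # u_c(P) in watts
```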
7. Uncertainty Model for Template Feature Matching
- Author
-
Zhang, Hongmou, Grießbach, Denis, Wohlfeil, Jürgen, and Börner, Anko; in: Paul, Manoranjan, Hitoshi, Carlos, and Huang, Qingming (eds.)
- Published
- 2018
8. Uncertainty propagation in Pu isotopic composition calculation by gamma spectrometry: theory versus experiment.
- Author
-
Sarkar, Arnab
- Subjects
UNCERTAINTY, NUCLEAR industry, FUEL cycle, SPECTROMETRY, DECISION making, GAMMA ray spectrometry, ALPHA ray spectrometry, SCINTILLATION spectrometry
- Abstract
Calculation and reporting of total uncertainties are essential in the nuclear industry, since measured results are used in decision making. Plutonium isotopic compositions (Pu IC) in different matrices are required at various stages of a closed-loop nuclear fuel cycle. Under- and/or over-estimating uncertainties in Pu IC may result in avoidable radiological emergencies. In this work, we present the uncertainty budget for Pu IC determination using 120–450 keV gamma emission lines recorded with an HPGe detector. Detailed uncertainty propagation equations based on the propagation of partial derivatives are constructed and solved. Effects of the individual uncertainties on the total uncertainty are studied for counting durations ranging from 5 min up to 24 h. Results are compared with TIMS results, and the theoretically calculated uncertainties are verified against multiple experimental data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2021
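To illustrate propagation by partial derivatives in this setting: for a pure product/quotient such as an isotope-atom ratio built from two gamma peak areas, the partial-derivative method reduces to adding relative variances in quadrature. The sketch below uses invented peak areas, efficiencies, and branching ratios, and `atom_ratio` is a deliberate simplification that omits decay and geometry corrections:

```python
import numpy as np

def atom_ratio(C1, C2, e1, e2, b1, b2):
    # activity-style ratio: counts corrected by efficiency and gamma yield
    return (C1 / (e1 * b1)) / (C2 / (e2 * b2))

C1, C2 = 48_500.0, 9_200.0      # peak areas (counts); u(C) = sqrt(C) (Poisson)
e1, e2 = 0.0112, 0.0158         # detection efficiencies, 2% relative u each
b1, b2 = 0.0645, 0.0212         # gamma emission probabilities, 1% relative u each

R = atom_ratio(C1, C2, e1, e2, b1, b2)
rel_var = (1 / C1 + 1 / C2               # Poisson terms: (u_C/C)^2 = 1/C
           + 2 * 0.02**2 + 2 * 0.01**2)  # efficiency and yield terms
print(f"R = {R:.4g} +/- {R * np.sqrt(rel_var):.2g}")
```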
9. Measurement uncertainty estimation for derived biological quantities.
- Author
-
Rigo-Bonnin, Raúl and Canalias, Francesca
- Published
- 2020
10. ISO/TS 20914:2019 – a critical commentary.
- Author
-
Farrance, Ian, Frenkel, Robert, and Badrick, Tony
- Subjects
INTERNATIONAL normalized ratio, GLOMERULAR filtration rate, MEDICAL laboratories
- Abstract
The long-anticipated ISO/TS 20914, Medical laboratories – Practical guidance for the estimation of measurement uncertainty, became publicly available in July 2019. This ISO document is intended as a guide for the practical application of estimating uncertainty in measurement (measurement uncertainty) in a medical laboratory. In some respects, the guide does indeed meet many of its stated objectives, with numerous very detailed examples. Even though this ISO guide is claimed to be based on the Evaluation of measurement data – Guide to the expression of uncertainty in measurement (GUM), JCGM 100:2008, we believe with some concern that several important statements and statistical procedures are incorrect, and others potentially misleading. The aim of this report is to highlight the major concerns we have identified. In particular, we believe the following items require further comment: (1) the use of the coefficient of variation and its potential for misuse requires clarification, (2) pooled variance and measurement uncertainty across changes in measuring conditions has been oversimplified and is potentially misleading, (3) uncertainty in the results of estimated glomerular filtration rate (eGFR) does not include all known uncertainties, (4) the international normalized ratio (INR) calculation is incorrect, (5) the treatment of bias uncertainty is considered problematic, (6) the rules for evaluating combined uncertainty in functional relationships are incomplete, and (7) specific concerns with some individual statements. [ABSTRACT FROM AUTHOR]
- Published
- 2020
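One computation at issue can be sketched directly: ISO/TS 20914 builds the imprecision component of measurement uncertainty from long-term QC data pooled over monitoring periods, and the commentary's second concern is that pooling across changed measuring conditions is oversimplified. A sketch with hypothetical monthly QC data:

```python
import numpy as np

def pooled_sd(sds, ns):
    # s_pooled^2 = sum((n_i - 1) * s_i^2) / sum(n_i - 1); strictly valid only
    # when the periods share one mean level, which is exactly the assumption
    # the commentary argues breaks down when reagent lots or conditions shift
    sds, ns = np.asarray(sds, float), np.asarray(ns)
    return np.sqrt(np.sum((ns - 1) * sds**2) / np.sum(ns - 1))

# invented monthly QC standard deviations (mmol/L) and replicate counts
print(pooled_sd(sds=[0.11, 0.13, 0.10], ns=[30, 28, 31]))
```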
11. Reassessing the Quality of Sea‐Ice Deformation Estimates Derived From the RADARSAT Geophysical Processor System and Its Impact on the Spatiotemporal Scaling Statistics.
- Author
-
Bouchat, Amélie and Tremblay, Bruno
- Subjects
GEOPHYSICS ,SPATIOTEMPORAL processes ,HYDROGRAPHY ,ACOUSTIC imaging ,CLIMATE change - Abstract
We reassess the trajectory errors inherent to sea‐ice deformation estimates with a new propagation of uncertainty derivation and show that previous formulations applied to deformation estimates from the RADARSAT Geophysical Processor System (RGPS) are either too high due to incorrect assumptions or too low due to neglected terms in certain cases. We show that when the resulting signal‐to‐noise ratios are used to discriminate the deformation estimates based on their quality, as done for buoy records, the spatiotemporal scaling exponents for the mean total deformation rate increase, especially at smaller scale, such that a space‐time coupling of the scaling—which is otherwise absent—emerges from the RGPS deformation data set, in accord with previous analyses performed with buoy observations. We also show that the preprocessing method used to reduce the effects of irregular sampling of the Lagrangian deformation fields can significantly impact the value of the deformation statistics and could possibly explain part of previous discrepancies between deformation statistics obtained with buoy records and large‐scale synthetic aperture radar (SAR) imagery. Specifically, we show that spurious lines of deformation appear when interpolating RGPS trajectories that present temporal sampling inconsistencies. In the context of using observed sea‐ice deformation statistics to constrain and improve the performance of sea‐ice models, high confidence in the observed deformation field statistics is necessary. Using appropriate, well‐documented methods to derive the set of statistics to be reproduced by models therefore becomes crucial. Plain Language Summary: Sea ice in the Arctic Ocean deforms along well‐defined lines. These linear deformation features are important for the climate system because they regulate the interaction between the ocean beneath the ice and the atmosphere above the ice. It is therefore of interest to know the statistical properties of these deformation features in order to reproduce them with sea‐ice models that are used for climate studies. We show in this study that the statistical properties of sea‐ice deformations obtained from satellite observations and used to evaluate sea‐ice models are very sensitive to the methods applied to obtain the deformation fields. Depending on the method used, artificial behaviors can appear in the observed deformation statistics, which then have no physical meaning. As such, care must be taken when analyzing the statistical properties of the observed deformation fields for them to have a physical significance. Our analysis also allows us to explain differences in previous studies that used different methods to analyze the statistical properties of the observed deformation fields. Key Points: (1) Deformation statistics from the RADARSAT Geophysical Processor System are sensitive to the preprocessing methods. (2) Space‐time coupling of the scaling emerges when deformation estimates are discriminated using their signal‐to‐noise ratio. (3) Interpolation of trajectories in time leads to larger scaling exponents and spurious space‐time coupling. [ABSTRACT FROM AUTHOR]
- Published
- 2020
12. Review of Uncertainty Quantification of Measurement and Computational Modeling in EMC Part I: Measurement Uncertainty.
- Author
-
Carobbi, Carlo F. M., Lallechere, Sebastien, and Arnaut, Luk R.
- Subjects
MONTE Carlo method, ELECTROMAGNETIC compatibility, UNCERTAINTY, UNITS of measurement, CENTRAL limit theorem
- Abstract
In this two-part paper, methods for the quantification of uncertainty in measurement and computational modeling are reviewed, with an emphasis on applications in electromagnetic compatibility (EMC). The current status of international standards relating to measurement uncertainty in EMC is provided (in Part I), as well as a review of selected alternative methods and recent developments in the domain of computational modeling and empirical uncertainty quantification (in Part II). [ABSTRACT FROM AUTHOR]
- Published
- 2019
13. Long-term effects of creep and shrinkage on the structural behavior of balanced cantilever prestressed concrete bridges
- Author
-
Andrade Borges, Emilia (author)
- Abstract
This thesis addresses the topic of ongoing (excessive) deformations observed in balanced cantilever prestressed concrete bridges all over the world. Many authors attribute this behavior to the time-dependent phenomena of creep and shrinkage. Balanced cantilever bridges are classified as creep-sensitive structures, and for that reason, a detailed analysis of the long-term structural behavior, such as deformations and prestress losses, is recommended. However, during the design of these bridges, commonly used code-based models generally tend to underestimate the long-term creep and shrinkage effects. Additionally, various (simplifying) assumptions are made when modeling these bridges, making their actual creep and shrinkage behavior unclear. This work aims to investigate whether the long-term effects of creep and shrinkage are indeed a plausible explanation for the excessive and ongoing deflections detected in a specific balanced cantilever bridge in the Netherlands: the Rooyensteinse Brug. This bridge, inaugurated in 1977 in Zoelen, currently exhibits a deflection at midspan of 0.43 m, more than two times what was anticipated in the bridge design. To investigate this behavior, a detailed two-and-a-half-dimensional finite element model was developed, incorporating a time-dependent phased analysis accounting for the construction phases. Creep and shrinkage effects were incorporated into the concrete material model through creep compliance and shrinkage strain curves. A sensitivity study is conducted to analyze the impact of: (i) different code-based models for creep and shrinkage, (ii) accounting for the large creep and shrinkage model uncertainties, (iii) different maturities on the creep compliance curve, and (iv) considering cross-sectional variability in the drying characteristics. The main findings showed that commonly used code-based models, including the current standard Eurocode 2, significantly underestimate the long-term (multi-decade) deflections. (V&R Kokerbruggen, strategic collaboration "Vervanging en Renovatie", Civil Engineering | Structural Engineering | Concrete Structures)
- Published
- 2023
14. Quantifying the range of future glacier mass change projections caused by differences among observed past-climate datasets.
- Author
-
Watanabe, Megumi, Yanagawa, Aki, Watanabe, Satoshi, Hirabayashi, Yukiko, and Kanae, Shinjiro
- Subjects
GENERAL circulation model, GLACIERS, ATMOSPHERIC temperature, ABLATION (Glaciology)
- Abstract
Observed past climate data used as input in glacier models are expected to differ among datasets, particularly those for precipitation at high elevations. Differences among observed past climate datasets have not yet been described as a cause of uncertainty in projections of future changes in glacier mass, although the uncertainty caused by varying future climate projections among general circulation models (GCMs) has often been discussed. Differences among observed past climate datasets are expected to propagate as uncertainty in future changes in glacier mass through the bias correction of GCMs and the calibration of glacier models. We project ensemble future changes in the mass of glaciers in Asia through the year 2100 using a glacier model. A set of 18 combinations of inputs, including two observed past air temperature datasets, three observed past precipitation datasets, and future air temperature and precipitation projections from three GCMs, was used. The uncertainty in projected changes in glacier mass was partitioned into three distinct sources: GCM uncertainty, observed past air temperature uncertainty, and observed past precipitation uncertainty. Our findings indicate that, in addition to the differences in climate projections among GCMs, differences among observed past climate datasets propagate fractional uncertainties of about 15% into projected changes in glacier mass. The fractional uncertainty associated with observed past precipitation was 33–50% of that of the observed air temperature. Differences in observed past air temperatures and precipitation did not propagate equally into the ultimate uncertainty of the glacier mass projection when ablation was dominant. [ABSTRACT FROM AUTHOR]
- Published
- 2019
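The partitioning step can be sketched as an ANOVA-style decomposition over the input-dataset axes of the ensemble; the 2 x 3 x 3 design matches the abstract, but every effect size below is fabricated:

```python
import numpy as np

rng = np.random.default_rng(2)
# fabricated 2100 mass-change projections (%) indexed
# [past-temperature dataset, past-precipitation dataset, GCM]
temp_eff = np.array([-3.0, 3.0])[:, None, None]
prec_eff = np.array([-1.0, 0.0, 1.0])[None, :, None]
gcm_eff  = np.array([-5.0, 0.0, 5.0])[None, None, :]
dm = -40.0 + temp_eff + prec_eff + gcm_eff + 0.5 * rng.standard_normal((2, 3, 3))

total = dm.var()
for name, axes in [("past temperature", (1, 2)),
                   ("past precipitation", (0, 2)),
                   ("GCM", (0, 1))]:
    frac = dm.mean(axis=axes).var() / total     # main-effect share of variance
    print(f"{name:>18}: {100 * frac:.0f}% of ensemble variance")
```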
15. Sensitivity analysis and propagation of uncertainty for the simulation of vehicle pass-by noise.
- Author
-
Hamdad, Hichem, Pézerat, Charles, Gauvreau, Benoit, Locqueteau, Christophe, and Denoual, Yannick
- Subjects
TRAFFIC noise, MICROPHONES, NOISE control, ACOUSTIC wave propagation, COMPUTER simulation
- Abstract
The study presented in this paper aims at developing an aid to the modeling of the pass-by noise of a vehicle, as called for in regulatory testing. The goal is to accurately predict and evaluate noise emissions earlier in the vehicle development cycle, i.e. before the industrialization stage. For these reasons, a synthesis model is developed. This model combines the acoustic flow of the most influential sources (powertrain, intake, exhaust tail, exhaust line, tire/road noise) with the acoustic transfer between these sources and the microphone located at 7.5 m from the axis of the test track, taking into consideration the motion of the vehicle. The input parameters of these models are considered uncertain; therefore, an uncertainty quantification study is first performed. These uncertainties are then propagated through the models using quasi-Monte Carlo methods. Finally, a sensitivity analysis is carried out to study the impact of the variability of the input parameters on the variability of the output noise levels of the synthesis models. Influential parameters have been identified and classified by vehicle family. [ABSTRACT FROM AUTHOR]
- Published
- 2019
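For illustration, quasi-Monte Carlo propagation replaces pseudo-random draws with a low-discrepancy sequence. The sketch below uses SciPy's Sobol sampler on a toy two-source level model standing in for the synthesis model; the bounds and `level_dB` are invented:

```python
import numpy as np
from scipy.stats import qmc

def level_dB(x):
    # energetic sum of two hypothetical source levels minus a transfer attenuation
    powertrain, tyre, atten = x[:, 0], x[:, 1], x[:, 2]
    return 10 * np.log10(10**(powertrain / 10) + 10**(tyre / 10)) - atten

sampler = qmc.Sobol(d=3, scramble=True, seed=0)
u = sampler.random_base2(m=12)                        # 4096 low-discrepancy points
x = qmc.scale(u, l_bounds=[70, 72, 8], u_bounds=[78, 80, 12])
y = level_dB(x)
print(f"pass-by level: {y.mean():.1f} dB, sd {y.std():.2f} dB")
```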
16. Flood hazard assessment of the Rhône River revisited with reconstructed discharges from lake sediments.
- Author
-
Evin, Guillaume, Wilhelm, Bruno, and Jenny, Jean-Philippe
- Subjects
FLOOD damage prevention, LAKE sediments, CLIMATE change, SPATIO-temporal variation, SEASONAL variations in biogeochemical cycles
- Abstract
Accurate flood hazard assessments are crucial for adequate flood hazard mapping and hydraulic infrastructure design. The choice of an acceptable and cost-effective solution for such assessments depends upon the estimation of quantiles for different characteristics of floods, usually maximum discharges. However, gauge series usually have a limited time length and, thereby, quantile estimates associated with high return periods are subject to large uncertainties. To overcome this limitation, reconstructed flood series from historical, botanical or geological archives can be incorporated. In this study, we propose a novel approach that i) combines classic series of observations with paleodischarges (of the Rhône River) reconstructed from open lake sediments (Lake Bourget, Northwestern Alps, France) and ii) propagates uncertainties related to the reconstruction method during the estimation of extreme quantiles. A Bayesian approach is adopted in order to properly treat the non-systematic nature of the reconstructed flow data, as well as the uncertainties related to the reconstruction method. While this methodology has already been applied to reconstruct maximum discharges from historical documents, tree rings or fluvial sediments, similar applications need to be tested today on open lake sediments, as they are one of the only archives that provide long and continuous paleoflood series. Reconstructed sediment volumes being subject to measurement errors, we evaluate and account for this uncertainty, along with the uncertainty related to the reconstruction method, the parametric uncertainty, and the rating-curve errors for systematic gauged flows, by propagating these uncertainties through the modeling chain. Reconstructed maximum discharges appear to far exceed observed values, reaching approximately 2,600, 4,200, 2,450 and 2,500 m3/s in 1689, 1711, 1733 and 1737 respectively, which correspond to historically-known catastrophic floods. Extreme quantiles are estimated using direct measurements of maximum discharges (1853-2004) only and then combined with the sedimentary information (1650-2013). The comparison of the resulting estimates demonstrates the added value of the sedimentary information. In particular, the four historical catastrophic floods are very unlikely if only direct observations are considered for quantile estimations. [ABSTRACT FROM AUTHOR]
- Published
- 2019
17. Zero Velocity Detection Without Motion Pre-Classification: Uniform AI Model for All Pedestrian Motions (UMAM)
- Author
-
Kone, Yacouba, Zhu, Ni, and Renaudin, Valérie (Géolocalisation (AME-GEOLOC), Université Gustave Eiffel)
- Subjects
Propagation of uncertainty, Zero-velocity detection, Zero velocity update, Pedestrian navigation, Inertial navigation, Inertial measurement unit (IMU), Inertial sensors, Foot-mounted positioning devices, Machine learning, Feature extraction, Legged locomotion, Footwear, Foot, Sensors, Open area test sites, Navigation, Detector, Algorithm, Computation, Computer science, [SPI]Engineering Sciences [physics], Instrumentation, Electrical and Electronic Engineering
Foot-mounted positioning devices are becoming more and more popular in different application fields. For example, inertial sensors are now embedded in safety shoes for safety monitoring. They allow positioning with zero velocity updates to bound the error growth of foot-mounted inertial sensors. High positioning accuracy depends on a robust zero-velocity detector (ZVD). Existing Artificial Intelligence (AI)-based methods classify the pedestrian dynamics to adjust the ZVD, at the cost of high computation and of error propagation from misclassification. We propose a machine learning model to detect zero-velocity moments without any pre-classification step, named the Uniform AI Model for All pedestrian Motions (UMAM). Performance is evaluated by benchmarking on two new subjects of opposite gender and different size, not included in the training data set, over complex indoor/outdoor paths of 2 km for subject 1 and 2.1 km for subject 2. We obtain an average 2D loop closure error of less than 0.37%.
- Published
- 2022
18. Platoon Trajectories Generation: A Unidirectional Interconnected LSTM-Based Car-Following Model
- Author
-
Yangxin Lin, Ping Wang, Yang Zhou, Fan Ding, Huachun Tan, and Chen Wang
- Subjects
Propagation of uncertainty, Computer science, Mechanical Engineering, Block diagram, Computer Science Applications, Traffic engineering, Automotive Engineering, Trajectory, Platoon, Algorithm
Car-following models have been widely applied and have made remarkable achievements in traffic engineering. However, the traffic micro-simulation accuracy of car-following models at the platoon level, especially during traffic oscillations, still needs to be enhanced. Rather than using traditional individual car-following models, we propose a new trajectory generation approach to generate platoon-level trajectories given the first leading vehicle's trajectory. In this article, we discuss the temporal and spatial error propagation issue of the traditional approach using a car-following block diagram representation. Based on the analysis, we point out that the error comes from the training method and the model structure. To fix that, we adopt two improvements on the basis of the traditional LSTM-based car-following model. We utilize a scheduled sampling technique during the training process to solve the error propagation in the temporal dimension. Furthermore, we develop a unidirectional interconnected LSTM model structure to extract trajectory features from the perspective of the platoon. As indicated by systematic empirical experiments, the proposed novel structure efficiently reduces the temporal-spatial error propagation. Compared with the traditional LSTM-based car-following model, the proposed model has almost 40% less error. The findings will benefit the design and analysis of micro-simulation for platoon-level car-following models.
- Published
- 2022
19. A simplified approach for efficiently simulating submarine slump generated tsunamis.
- Author
-
Lo, Peter H.-Y. and Liu, Philip L.-F.
- Subjects
TSUNAMIS, TSUNAMI warning systems, LANDSLIDES, SUBMARINES (Ships), SENSITIVITY analysis
- Abstract
A simplified approach was proposed to efficiently simulate the tsunamis generated by a submarine slump. The landslide tsunami generation process was simulated using a long-wave model, simplifying the wave generation problem as a slump traveling down a plane slope with a prescribed trajectory. As a result, the landslide tsunami generation process was parameterized by 11 input parameters. The wave profile at the end of the wave generation process can then be specified as the initial conditions in any numerical tsunami propagation model to study the subsequent tsunami propagation. To demonstrate the capability of this new landslide tsunami generation approach, we used it in combination with an existing Boussinesq wave solver to simulate the 1998 Papua New Guinea landslide tsunami. The results based on the newly calculated physics-based wave profile compare reasonably well with field measurements and the results based on an existing tuning-based wave profile. Sensitivity tests were performed to examine the sensitivity of the runup results to each of the 11 input parameters. Six sets of 100-case Monte Carlo experiments were conducted to investigate the propagation of uncertainty from the input parameters to the runup results. Runup uncertainty was found to be approximately 1.5 times the parameter uncertainty, highlighting the uncertain nature of landslide tsunamis.
• A new approach is proposed to efficiently simulate submarine slump generated tsunamis.
• The new approach shows good predictive ability for the 1998 Papua New Guinea tsunami.
• Sensitivity analysis is performed for the 11 model input parameters.
• More than 600 potential tsunami scenarios are simulated in Monte Carlo experiments.
• Runup uncertainty is found to be 1.5 times the parameter uncertainty. [ABSTRACT FROM AUTHOR]
- Published
- 2023
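The paper's amplification metric can be reproduced in miniature: feed input parameters with a known coefficient of variation through a surrogate runup law and compare the output CV to the input CV. The power-law surrogate and its exponents below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
N, cv_in = 100, 0.10                    # a 100-case experiment with 10% input CV

vol   = 1.0 + cv_in * rng.standard_normal(N)    # slump volume (normalized)
speed = 1.0 + cv_in * rng.standard_normal(N)    # slump speed (normalized)
runup = vol**0.8 * speed**1.1                   # invented surrogate runup law

cv_out = runup.std() / runup.mean()
print(f"runup CV / parameter CV ~ {cv_out / cv_in:.2f}")
```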
20. Error propagation model and optimal control method for the quality of remanufacturing assembly
- Author
-
Conghu Liu, Wenyi Li, Xiao Liu, and Cuixia Zhang
- Subjects
Statistics and Probability, Propagation of uncertainty, Artificial Intelligence, Computer science, General Engineering, Quality (business), Optimal control, Remanufacturing, Reliability engineering
In order to improve the quality of remanufacturing assembly under uncertainty, for the sustainability of the remanufacturing industry, an error propagation model of the remanufacturing assembly process and its optimal control method are established. First, the state space model of error propagation is established by taking the work-in-process parameter errors of each process as the initial state of the procedure and the parameters of remanufactured parts and operation quantities as the input. Then, the quality control issue of remanufacturing assembly is transformed into a constrained convex quadratic program based on this model. Finally, the proposed method is used to control the remanufactured-crankshaft assembly quality. The experimental results show that the axial-clearance consistency and the crankshaft torque are improved, and the one-time assembly success rate of a remanufactured crankshaft is increased from 96.97% to 99.24%. This study provides a theoretical model and method support for the quality control of remanufacturing assembly and has a practical effect on improving the quality of remanufactured products.
- Published
- 2022
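A minimal sketch of covariance propagation through a linear state-space assembly model of the kind described, with invented matrices; at each station the error covariance P is mapped through P <- A P A^T + B Q B^T:

```python
import numpy as np

A = np.array([[1.0, 0.2],
              [0.0, 0.9]])              # hypothetical process error coupling
B = np.array([[0.5],
              [0.1]])                   # sensitivity to the incoming part error
P = np.diag([0.04, 0.01])               # initial work-in-process error covariance
Q = np.array([[0.09]])                  # variance of the remanufactured-part error

for k in range(4):                      # four assembly stations
    P = A @ P @ A.T + B @ Q @ B.T       # covariance after station k
    print(f"station {k + 1}: output error SDs = {np.sqrt(np.diag(P)).round(3)}")
```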
21. Accurate Statistical BER Analysis of DFE Error Propagation in the Presence of Residual ISI
- Author
-
Paul Kwon, Elad Alon, and Kunmo Kim
- Subjects
Propagation of uncertainty, Computer science, Equalizer, Interference (wave propagation), Residual, Expression (mathematics), Additive white Gaussian noise, Pulse-amplitude modulation, Bit error rate, Electrical and Electronic Engineering, Algorithm, Computer Science::Information Theory
This brief discusses the limitation of previously published statistical decision feedback equalizer (DFE) models when residual inter-symbol interference (ISI) is present and proposes a method that overcomes this limitation. The paper presents an accurate closed-form expression for the DFE with residual ISI and additive white Gaussian noise (AWGN). The analysis is extended to 4-level pulse amplitude modulation (PAM-4) to cover serial links of current interest. The results verify that the proposed model estimates the bit error rate (BER) accurately.
- Published
- 2022
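To see why error propagation matters in a DFE: once a decision is wrong, the fed-back symbol is wrong, which biases the next decision. A small Monte Carlo sketch for BPSK with one cancelled postcursor and one residual ISI tap; all tap values and the noise level are invented, and this is a simulation, not the brief's closed-form expression:

```python
import numpy as np

rng = np.random.default_rng(4)
N, sigma = 200_000, 0.35                 # symbols and AWGN standard deviation
h1, r = 0.5, 0.08                        # fed-back postcursor and residual ISI tap

bits = rng.choice([-1.0, 1.0], size=N)   # BPSK symbols
noise = sigma * rng.standard_normal(N)
dec = bits.copy()                        # seed the first two decisions correctly
for k in range(2, N):
    # received: symbol + postcursor ISI + residual (unequalized) ISI + noise
    y = bits[k] + h1 * bits[k - 1] + r * bits[k - 2] + noise[k]
    dec[k] = np.sign(y - h1 * dec[k - 1])    # feedback subtracts past decisions
print("BER including error propagation:", np.mean(dec[2:] != bits[2:]))
```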
22. An analytical framework for local and global system kinematic reliability sensitivity of robotic manipulators
- Author
-
Jun Hong and Qiangqiang Zhao
- Subjects
Computer Science::Robotics, Propagation of uncertainty, Control theory, Position (vector), Computer science, Applied Mathematics, Modeling and Simulation, Expectation propagation, Monte Carlo method, Sensitivity (control systems), Kinematics, Uncertainty analysis, Reliability (statistics)
This paper develops a novel analytical framework for system kinematic reliability sensitivity analysis of robotic manipulators, which can provide the analytical results of local and global reliability sensitivity defined on the pose and position errors. First, uncertainty analysis of the pose error of the end-effector is accomplished by virtue of the second-order closed-form error propagation formula on motion groups. Then, the system kinematic reliability, namely the probability of the system kinematic error located within the prescribed safe boundary, is analytically calculated. Specifically, the non-central chi-square approximation and expectation propagation technique are employed to compute the system kinematic reliability defined on the position and pose errors, respectively. On this basis, the local and global system kinematic reliability sensitivity of robotic manipulators are analytically obtained. Finally, the effectiveness of the proposed method is validated by comparison with the Monte Carlo simulation and Kriging modeling method. The results indicate the proposed method has good efficacy and efficiency in system kinematic reliability sensitivity analysis for robotic manipulators.
- Published
- 2022
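The quantity being computed is P(error inside a safe boundary) under a propagated error covariance; the paper obtains it analytically via chi-square-type approximations and validates against Monte Carlo. A sketch of that Monte Carlo baseline with an invented covariance and bound:

```python
import numpy as np

rng = np.random.default_rng(5)
# hypothetical end-effector position error covariance (m^2), from propagation
Sigma = np.diag([0.8e-3, 1.1e-3, 0.6e-3]) ** 2
e = rng.multivariate_normal(np.zeros(3), Sigma, size=1_000_000)
bound = 2.5e-3                                    # 2.5 mm safe boundary
R = np.mean(np.linalg.norm(e, axis=1) < bound)    # kinematic reliability
print(f"kinematic reliability ~ {R:.4f}")
```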
23. Tracing design flood hydrograph uncertainty in reservoir flood control system
- Author
-
Yunyun Li, Yimin Wang, Bin Wu, Aijun Guo, and Jianxia Chang
- Subjects
Flood control, Hydrology, Propagation of uncertainty, Applied Mathematics, Modeling and Simulation, Environmental science, Hydrograph, Tracing, Copula (probability theory)
Increasing numbers of studies have highlighted that the remarkable inherent uncertainty in the design flood hydrograph (DFH) can potentially undermine flood management decisions. In order to quantitatively trace the propagation of DFH uncertainty in a reservoir flood control system, we propose a novel methodological framework including three core parts. First, a copula-based DFH estimation model integrating Bayes' theorem is presented to estimate the DFH under model parameter uncertainty. Second, we perform an optimal reservoir operation model for flood control (OROMFC) with the uncertain DFH as the input variable to derive reservoir flood control operations (i.e., the output variable). Third, an information theory-based model is designed to trace the propagation of DFH uncertainty in the reservoir flood control system. A reservoir flood control system in the Han River basin in China is selected as the case study. The results indicate that uncertainty in reservoir flood control operations is reduced in comparison with the remarkable uncertainty in the DFH, due to the performance of the OROMFC. Specifically, uncertainty in reservoir flood control operations in periods close to the peak flow is much smaller than that in other periods. This further highlights the importance of reservoir flood control operations during periods prior to the peak flow. Additionally, we find that uncertainty in the flood peak of the DFH is the dominant factor affecting reservoir flood control operations, compared with that in the flood volume. Moreover, we explore the impact of reservoir flood control capacity on reservoir flood operations in the context of DFH uncertainty. An interesting linear expression is found and fitted for identifying design flood events that induce reservoir overtopping under a specific reservoir flood control capacity.
- Published
- 2022
24. Uncertainty Analysis in the Inverse of Equivalent Conductance Method for Dealing With Crosstalk in 2-D Resistive Sensor Arrays
- Author
-
Javier Martinez-Cesteros, Carlos Medrano-Sanchez, Raul Igual-Catalan, and Inmaculada Plaza-Garcia
- Subjects
Propagation of uncertainty, Matrix (mathematics), Crosstalk, Computer science, Inverse, Sensitivity (control systems), Electrical and Electronic Engineering, Topology, Instrumentation, Measure (mathematics), Physical quantity
2-D resistive sensor arrays (RSAs) appear in many applications to measure physical quantities over a surface. However, they suffer from a crosstalk problem when the simplest configuration is used to address a row-column pair. Thus, the value of a single cell cannot be measured directly. Several hardware solutions have been proposed to solve this totally or partially, but all of them make the circuit more complex. In a previous paper, we proposed an innovative numerical solution to eliminate crosstalk after a complete scan of the matrix, referred to in this paper as the Inverse of Equivalent Conductance Method (IECM). In the current study, we have analyzed the implications of the method for the uncertainty of the calculated cell resistance by first deriving the sensitivity of the solution and then applying uncertainty propagation theory. The theoretical results have been tested on simulated arrays and on a real 6x6 RSA with known resistance values, with good agreement. The uncertainty analysis is able to predict which values are reliable. In general, the lowest resistances of the array are better solved by the IECM, as expected. In addition, it is also shown that the IECM has the potential to be adapted to other hardware configurations that reduce crosstalk, helping to overcome some of its limitations.
- Published
- 2022
25. Calibrating SoilGen2 for interglacial soil evolution in the Chinese Loess Plateau considering soil parameters and the effect of dust addition rhythm
- Author
-
Ranathunga, Keerthika Nirmani, Yin, Qiuzhen, Yu, Yanyan, and Finke, Peter (UCL - SST/ELI/ELIC - Earth & Climate)
- Subjects
Technology and Engineering, Dust deposition, EVERGREEN, Soil science, Deposition (geology), Loess, HOLOCENE, Calibration, CLAY, Earth-Surface Processes, Propagation of uncertainty, RAINFALL INTERCEPTION, EVAPOTRANSPIRATION, SoilGen2 calibration, Soil parameters, Uncertainty, VEGETATION RESTORATION, Soil carbon, FOREST, Paleosol, CARBON MODEL, Earth and Environmental Sciences, WATER-BALANCE, Interglacial, PEDOGENESIS, Environmental science
To better understand interglacial paleosol development by quantifying paleosol development processes on the Chinese Loess Plateau (CLP), we need a soil genesis model calibrated for long timescales. Here, we calibrate a process-based soil genesis model, SoilGen2, by comparing simulated and measured soil properties for the Holocene and MIS-13 paleosols formed on the CLP under various parameter settings. The calibration was made sequentially on three major soil process formulations, including decalcification, clay migration and soil organic carbon, which are represented by various process parameters. The order of the tuned parameters was based on sensitivity analyses performed previously on West European loess and the CLP. After the calibration of the intrinsic soil process parameters, the effect of uncertainty in the dust deposition rate on the calibration results was assessed. Our results show that the simulated soil properties are very sensitive to the ten reconstructed dust deposition scenarios, reflecting the propagation of uncertainty in dust deposition through the model simulations. Our results also show the equal importance of calibrating soil process parameters and defining correct external forcings in the future use of soil models. Our calibrated model allows interglacial soil simulation on the CLP over long timescales.
- Published
- 2022
26. InSAR Phase Unwrapping by Deep Learning Based on Gradient Information Fusion
- Author
-
Chao Wang, Yixian Tang, Hong Zhang, Liutong Li, and Feng Gu
- Subjects
Propagation of uncertainty ,Mean squared error ,Computer science ,business.industry ,Deep learning ,Phase (waves) ,Geotechnical Engineering and Engineering Geology ,Class (biology) ,Robustness (computer science) ,Interferometric synthetic aperture radar ,Segmentation ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Algorithm - Abstract
Phase unwrapping is an important step in InSAR technology. At present, deep learning struggles with the phase unwrapping problem because the fringe density of actual interferograms varies, resulting in class imbalance for semantic segmentation; deep learning also cannot fully exploit gradient information, and a large number of residues is difficult to handle. In this Letter, a phase unwrapping semantic segmentation model based on gradient information fusion and an improved PhaseNet network is proposed to solve the problems of imbalanced classification and error propagation. 21,613 pairs of phase samples are constructed using simulated and real Sentinel-1 InSAR data. The experimental results show that the average classification accuracy of the method can reach 97%, and the mean square error is only 0.97. The average processing time for a 256 x 256 slice is only 0.5 seconds. Compared with traditional methods and other deep learning methods, this method solves the classification imbalance problem, and the use of fused gradient information improves the efficiency of the algorithm and reduces the burden of network classification and the error propagation, showing increased robustness in cases with many residues and high fringe density.
- Published
- 2022
27. Significance of measured negative dead time of a radiation detector using two-source method for educational purpose
- Author
-
Rong-Jiun Sheu, Wey-Tsang Liu, Po-Wen Fang, Aswin kumar Anbalagan, Chih-Hao Lee, and Jen-Chieh Cheng
- Subjects
Physics ,Propagation of uncertainty ,Optics ,business.industry ,Monte Carlo method ,Dead time ,business ,Particle detector - Abstract
This paper emphasizes the correction of a misconception among students about the presence of negative dead-time values of a radiation detector determined using the two-source method. To specify the importance of t...
- Published
- 2021
28. Propagation of CMORPH rainfall errors to REW streamflow simulation mismatch in the upper Zambezi Basin
- Author
-
Rientjes, Tom, Haile, Alemseged Tamiru, Makurira, Hodson, Gumindoga, W., and Reggiani, Paolo (Department of Water Resources, UT-I-ITC-WCC; Faculty of Geo-Information Science and Earth Observation)
- Subjects
Physical geography, Watershed, Calibration (statistics), Model calibration, Drainage basin, Hydrograph, Structural basin, Error, Water balance, Streamflow, Earth and Planetary Sciences (miscellaneous), Objective functions, Water Science and Technology, Geography, Propagation of uncertainty, QE1-996.5, Geology, Zambezi basin, GB3-5030, Climatology, Environmental science, Bias correction
Study region: This research is carried out in the Kabompo Basin, a headwater catchment of the Zambezi Basin with an area of 67,261 km2. The Kabompo River originates in the North-Western Province of Zambia, between the Zambezi and Congo River Basins. Study focus: This study focuses on the propagation of errors in the National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center-MORPHing (CMORPH) rainfall product into simulated streamflow. Assessments are based on automated multi-objective calibration of the Representative Elementary Watershed (REW) model (2006–2012). Parameters of the model were optimized using the ε-NSGAII algorithm. Assessments of error propagation targeted streamflow modelling for hydrograph shape and volume, specific hydrograph characteristics, and water balance composition. New hydrological insights for the region: By use of multiple objective functions, this study shows that uncorrected CMORPH results in substantial augmentation of rainfall error into streamflow simulation mismatch, whereas bias-corrected estimates result in attenuation of error. The study shows that analysis of water balance composition has great potential to improve the application of satellite precipitation products in water management and decision making in the Zambezi basin. The study advises optimization of model parameters for each respective rainfall input data source, so as to identify the outcomes and effects of the respective rainfall data sources on the simulated water balance. Optimization of parameter sets by ε-NSGAII yields optimal parameter sets that support the findings on water balance composition and closure.
- Published
- 2021
29. Multi-Threshold Detector With Fair Power Allocation Coefficients for NOMA Signals With Statistical CSI
- Author
-
Mahdi Majidi, S. Mohammad Saberali, and Toloo Analooei
- Subjects
Propagation of uncertainty, Computer science, Detector, Computer Science Applications, Power (physics), Successive interference cancellation, Channel state information, Modeling and Simulation, Rician fading, Telecommunications link, Electrical and Electronic Engineering, Algorithm, Computer Science::Information Theory, Communication channel
In this letter, we modify the multi-threshold detector (MTD) for detection of downlink non-orthogonal multiple access (NOMA) signals in the Rician fading channel with statistical knowledge of channel state information (CSI). First, the error probability (Pe) for both the MTD and successive interference cancellation (SIC) with statistical CSI is calculated. Then, we use the obtained Pe expression to allocate power fairly to the different users, in the sense that the maximum Pe over the users is minimized. Finally, computer simulations are performed, which show that the MTD outperforms SIC when error propagation occurs, even for noise-free SIC.
- Published
- 2021
30. Uncertainty propagation for dropout-based Bayesian neural networks
- Author
-
Wataru Kumagai, Takafumi Kanamori, and Yuki Mae
- Subjects
Propagation of uncertainty, Artificial neural network, Uncertain data, Computer science, Cognitive Neuroscience, Computer Science::Neural and Evolutionary Computation, Bayesian probability, Uncertainty, Reproducibility of Results, Bayes' theorem, Bayesian inference, Machine learning, Artificial Intelligence, Neural Networks, Computer, Artificial intelligence, Dropout (neural networks), Monte Carlo Method
Uncertainty evaluation is a core technique when deep neural networks (DNNs) are used in real-world problems. In practical applications, we often encounter unexpected samples that were not seen during the training process. Not only achieving high prediction accuracy but also detecting uncertain data is significant for safety-critical systems. In statistics and machine learning, Bayesian inference has been exploited for uncertainty evaluation. Bayesian neural networks (BNNs) have recently attracted considerable attention in this context, as a DNN trained using dropout can be interpreted as a Bayesian method. Based on this interpretation, several methods to calculate the Bayes predictive distribution for DNNs have been developed. Though the Monte Carlo method called MC dropout is a popular method for uncertainty evaluation, it requires a number of repeated feed-forward calculations of DNNs with randomly sampled weight parameters. To overcome this computational issue, we propose a sampling-free method to evaluate uncertainty. Our method converts a neural network trained using dropout to the corresponding Bayesian neural network with variance propagation. Our method is applicable not only to feed-forward NNs but also to recurrent NNs such as LSTMs. We report the computational efficiency and statistical reliability of our method in numerical experiments on language modeling using RNNs and on out-of-distribution detection with DNNs.
- Published
- 2021
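The core move can be sketched for a single inverted-dropout plus linear layer with invented shapes: replace repeated stochastic forward passes with closed-form moments of the dropped-out activations. Real variance propagation through deep or recurrent networks needs further rules for nonlinearities, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(6)
p = 0.8                                   # keep probability
a = rng.standard_normal(64)               # activations entering dropout
W = rng.standard_normal((10, 64)) / 8     # weights of the following linear layer

# analytic moments of y = W @ (m / p * a) with m_i ~ Bernoulli(p)
mean = W @ a
var = (W**2) @ (a**2) * (1 - p) / p

# MC-dropout reference: the repeated sampling the method avoids
masks = rng.binomial(1, p, size=(20_000, 64)) / p
y = (masks * a) @ W.T
print(np.allclose(mean, y.mean(axis=0), atol=0.05),
      np.allclose(var, y.var(axis=0), rtol=0.15))
```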
31. Propagation Effects of Power Supply Fluctuation in Wireless Power Amplifiers
- Author
-
Park, Youngcheol and Yoon, Hoijin; in: Kim, Tai-hoon, Stoica, Adrian, Fang, Wai-chi, Vasilakos, Thanos, Villalba, Javier García, Arnett, Kirk P., Khan, Muhammad Khurram, and Kang, Byeong-Ho (eds.)
- Published
- 2012
32. Analysis and Control of Uncertainty in Wireless Transmitting Devices
- Author
-
Park, Youngcheol; in: Kim, Tai-hoon, Stoica, Adrian, Fang, Wai-chi, Vasilakos, Thanos, Villalba, Javier García, Arnett, Kirk P., Khan, Muhammad Khurram, and Kang, Byeong-Ho (eds.)
- Published
- 2012
33. Interpretation of the Knutson et al. (2020) hurricane projections, the impact on annual maximum wind-speed, and the role of uncertainty
- Author
-
Stephen Jewson
- Subjects
Propagation of uncertainty, Environmental Engineering, Climate change, Storm, Wind speed, Consistency (statistics), Climatology, Environmental Chemistry, Environmental science, Climate model, Tropical cyclone, Safety, Risk, Reliability and Quality, Intensity, General Environmental Science, Water Science and Technology
Knutson et al. (BAMS 101:E303–E322, 2020) combined results from many studies to produce distributions of how tropical cyclone frequency and average intensity may change in the future. These distributions can be applied to risk models by using them to simulate multiple realisations of possible changes in future storm climate. Ideally the simulations would be performed to maintain consistency between the changes in frequency and average intensity, but it is not obvious how to do that, or whether it is even possible. Considering North Atlantic cyclones, we test three methods for simulating from Knutson et al. and find two that are consistent and one that is not. Using the best of the methods we find that there are no future scenarios in which weak storms increase in frequency while very intense storms decrease in frequency. We then apply changes in frequency and average intensity to a risk model for annual maximum intensity. After integrating over the uncertainty, we find that on a conditional basis all storms become more intense under climate change, and that the annual probability of the most intense storms increases by 19%. Annual maximum intensity decreases at intermediate levels of intensity as a result of the non-linear propagation of uncertainty. We also show that the changes in risk implied by the Knutson et al. results cannot be well approximated if the propagation of uncertainty is ignored. Future work should involve updating these results with tropical cyclone projections from the latest high-resolution climate models.
- Published
- 2021
34. Least-Squares Fitting of Multidimensional Spectra to Kubo Line-Shape Models
- Author
-
Christopher M. Cheatum and Kevin C. Robben
- Subjects
Propagation of uncertainty, Scale invariance, Residual, Article, Surfaces, Coatings and Films, Diffusion, Nonlinear system, Nonlinear Dynamics, Orders of magnitude (time), Norm (mathematics), Metric (mathematics), Line (geometry), Linear Models, Materials Chemistry, Statistical physics, Least-Squares Analysis, Physical and Theoretical Chemistry, Algorithms, Mathematics
We report a comprehensive study of the efficacy of least-squares fitting of multidimensional spectra to generalized Kubo line-shape models and introduce a novel least-squares fitting metric, termed the scale invariant gradient norm (SIGN), that enables a highly reliable and versatile algorithm. The precision of dephasing parameters is between 8× and 50× better for nonlinear model fitting compared to that for the centerline-slope (CLS) method, which effectively increases data acquisition efficiency by 1–2 orders of magnitude. Whereas the CLS method requires sequential fitting of both the nonlinear and linear spectra, our model fitting algorithm only requires nonlinear spectra but accurately predicts the linear spectrum. We show an experimental example in which the CLS time constants differ by 60% for independent measurements of the same system, while the Kubo time constants differ by only 10% for model fitting. This suggests that model fitting is a far more robust method of measuring spectral diffusion than the CLS method, which is more susceptible to structured residual signals that are not removable by pure solvent subtraction. Statistical analysis of the CLS method reveals a fundamental oversight in accounting for the propagation of uncertainty by Kubo time constants in the process of fitting to the linear absorption spectrum. A standalone desktop app and source code for the least-squares fitting algorithm are freely available, with example line-shape models and data. We have written the MATLAB source code in a generic framework where users may supply custom line-shape models. Using this application, a standard desktop fits a 12-parameter generalized Kubo model to a 10⁶ data-point spectrum in a few minutes.
- Published
- 2021
35. MUnCH: a calculator for propagating statistical and other sources of error in passive microrheology
- Author
-
Andrés Córdoba and Jay D. Schieber
- Subjects
Microrheology, Physics, Physics::Biological Physics, Propagation of uncertainty, Observational error, Condensed Matter Physics, Displacement (vector), Synthetic data, Condensed Matter::Soft Condensed Matter, Transformation (function), Position (vector), Brownian dynamics, General Materials Science, Statistical physics
A complete propagation of error procedure for passive microrheology is illustrated using synthetic data from generalized Brownian dynamics. Moreover, measurement errors typical of bead tracking done with laser interferometry are employed. We use the blocking transformation method of Flyvbjerg and Petersen (J Chem Phys 91(1):461–466, 1989), applicable to estimating statistical uncertainty in autocorrelations for any time-series data, to account properly for the correlation in the bead position data. These contributions to uncertainty in correlations have previously been neglected when calculating the error in the mean-squared displacement (MSD) of the probe bead. The uncertainty in the MSD can be underestimated by a factor of about 20 if the correlation in the bead position data is neglected. Using the generalized Stokes-Einstein relation, the uncertainty in the MSD is then propagated to the dynamic modulus. Uncertainties in the bead radius and the trap stiffness are also taken into account. A simple code used to aid in the calculations is provided.
- Published
- 2021
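The blocking transformation at the heart of the error estimate is simple to state: repeatedly average adjacent pairs of the series and track the naive variance of the mean, which plateaus once blocks exceed the correlation time. A sketch on an AR(1) series standing in for correlated bead positions:

```python
import numpy as np

def blocking(x):
    # Flyvbjerg-Petersen: variance-of-the-mean estimate at each blocking level
    x = np.asarray(x, float)
    out = []
    while x.size >= 2:
        out.append(x.var(ddof=1) / x.size)            # naive estimate
        n2 = x.size // 2
        x = 0.5 * (x[0::2][:n2] + x[1::2][:n2])       # average adjacent pairs
    return np.array(out)

rng = np.random.default_rng(7)
x = np.empty(2**16); x[0] = 0.0
for k in range(1, x.size):                # AR(1): strongly correlated samples
    x[k] = 0.95 * x[k - 1] + rng.standard_normal()
print(blocking(x))    # rises from the naive value, then plateaus at the answer
```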
36. A Bayesian Approach to the Eagar–Tsai Model for Melt Pool Geometry Prediction with Implications in Additive Manufacturing of Metals
- Author
-
Prasanna V. Balachandran, Brendan J. Whalen, and Ji Ma
- Subjects
Propagation of uncertainty, Materials science, Monte Carlo method, Bayesian probability, General Materials Science, Geometry, Laser power scaling, Molar absorptivity, Parameter space, Bayesian inference, Porosity, Industrial and Manufacturing Engineering
This paper focuses on improving the melt pool geometry predictions and quantifying uncertainties using an adapted version of the Eagar–Tsai (E–T) model that incorporates temperature-dependent properties of the material as well as powder conditions. Additionally, Bayesian inference is employed to predict distributions for the E–T model input parameters of laser absorptivity and powder bed porosity by incorporating experimental results into the analysis. Monte Carlo uncertainty propagation is then used with these parameter distributions to estimate the melt pool depth and associated uncertainty. Our results for the 316L stainless steel suggest that both the absorptivity and powder bed porosity are strongly influenced by the laser power. In contrast, the scanning speed has only a marginal effect on both the absorptivity and powder bed porosity. We constructed a printability map using the Bayesian E–T model based on power-dependent input parameter values to demonstrate the merit of the approach. The Bayesian approach improved the accuracy in predicting the keyhole regions in the laser power-scan speed parameter space for the 316L stainless steel. Although applied to a specific adaptation of the E–T model, the method put forth can be extended to quantify uncertainties in other numerical models as well as in the estimation of unknown parameters.
- Published
- 2021
37. Co-Existing Preamble and Data Transmissions in Random Access for MTC With Massive MIMO
- Author
-
Jie Ding and Jinho Choi
- Subjects
Propagation of uncertainty ,Computer science ,MIMO ,Antenna array ,Signal-to-noise ratio ,Single antenna interference cancellation ,Electrical and Electronic Engineering ,Algorithm ,Throughput (business) ,Random access ,Data transmission
In this article, a Co-existing Preamble and Data (CoPD) random access approach for machine-type communications (MTC) is proposed and studied in massive multiple input multiple output (MIMO) systems. Unlike conventional grant-free random access in which preambles are time-multiplexed with data, the proposed approach enables CoPD transmissions by different groups of devices, which leverages the favourable propagation of massive MIMO and the successive interference cancellation (SIC) technique. To fully understand the performance behavior of the proposed approach, the error characteristics of channel estimation (CE) due to SIC error propagation are investigated. To prevent the CE error variance from growing arbitrarily with time, which leads to data transmission failure, key conditions with respect to the preamble length $L$, traffic intensity $\lambda$, and average number of data packets per active device $\bar b$ are derived. Under these conditions, we demonstrate that the throughput of the proposed CoPD approach cannot be made arbitrarily high by increasing $\bar b$ and $\lambda$, even when the antenna array size, denoted by $M$, grows arbitrarily in massive MIMO. Accordingly, its maximum throughput and success probability are obtained theoretically. Simulation results verify the analysis and confirm that the maximum throughput of the proposed CoPD approach is at least 4 times that of the conventional one. Furthermore, its success probability approaches 1 as $M \to \infty$, as long as the derived conditions are met.
- Published
- 2021
38. Device-Free Activity Detection and Wireless Localization Based on CNN Using Channel State Information Measurement
- Author
-
Xiaofu Wu, Wei-Ping Zhu, Jun Yan, Wu Wei, Daniel P. K. Lun, and Lingpeng Wan
- Subjects
Propagation of uncertainty ,Computer science ,Feature extraction ,Convolutional neural network ,Activity recognition ,Channel state information ,Frequency domain ,Wireless ,Electrical and Electronic Engineering ,Instrumentation ,Algorithm ,Wireless sensor network
In this paper, a novel decoupled device-free activity detection and position estimation scheme is proposed, using convolutional neural networks with channel state information (CSI) measurements as inputs. In the proposed scheme, the two processes of activity recognition and localization are realized in parallel but independently. Compared with existing joint approaches, the decoupled scheme is free of error propagation between the two processes and achieves better performance, especially for position estimation. We also propose a CSI-based radio image construction that uses amplitude measurements with temporal, spatial, and frequency-domain information; this has proven very competitive for feature extraction compared with state-of-the-art methods. Extensive experimental and simulation results under a real test setup show the superiority of the proposed scheme.
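The decoupled structure can be illustrated with a short PyTorch sketch: two independent CNNs consume the same CSI "radio image", one for activity classes and one for position regression, so neither task's error propagates into the other. Layer sizes, class count, and image dimensions here are our assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    def make_cnn(out_dim):
        # Small CNN over a 1-channel 32x32 CSI "radio image".
        return nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(16 * 8 * 8, out_dim),
        )

    activity_net = make_cnn(out_dim=6)   # 6 hypothetical activity classes
    position_net = make_cnn(out_dim=2)   # (x, y) position regression

    csi_image = torch.randn(4, 1, 32, 32)   # a batch of CSI amplitude images
    logits = activity_net(csi_image)        # trained with cross-entropy ...
    coords = position_net(csi_image)        # ... and with MSE, independently
    print(logits.shape, coords.shape)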
- Published
- 2021
39. On Moment Estimation From Polynomial Chaos Expansion Models
- Author
-
Tom Lefebvre
- Subjects
Propagation of uncertainty ,Control and Optimization ,Polynomial chaos ,Basis (linear algebra) ,Stochastic process ,Computer science ,Robust optimization ,Moment (mathematics) ,Control and Systems Engineering ,Applied mathematics ,Uncertainty quantification ,Series expansion
Polynomial Chaos Expansions (PCEs) offer an efficient alternative for assessing the statistical properties of a model output given the statistical properties of several uncertain model inputs, particularly under the restriction of probing the forward model as little as possible. The use of PCEs has steadily increased in recent literature, with applications spanning from system analysis to robust control design and optimization. The principal idea is to model the output as a series expansion, with the expansion basis depending only on the stochastic variables. The basis is chosen so that it is well suited to support the uncertainty propagation. Once the expansion coefficients have been identified, accessing the statistical moments benefits from the linearity of the expansion and the properties of the moment operator. As a result, the first two statistical moments of the model output can be computed easily using well-known analytical expressions. For high-order moments, analytical expressions also exist but are inefficient to evaluate. In this letter we present three strategies to efficiently calculate high-order moments from the expansion model and provide an empirical study of the associated computation times, supporting potential users in making an informed choice.
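The closed-form first two moments are easy to demonstrate for a 1-D Hermite expansion y = sum_k c_k He_k(xi) with xi ~ N(0,1): since E[He_j He_k] = k! delta_jk, the mean is c_0 and the variance is sum_{k>=1} c_k^2 k!. A minimal Python check (coefficients chosen arbitrarily for illustration):

    import math
    import numpy as np
    from numpy.polynomial import hermite_e as He

    c = np.array([1.0, 0.5, 0.2, 0.05])       # example expansion coefficients
    mean = c[0]
    var = sum(c[k]**2 * math.factorial(k) for k in range(1, c.size))

    xi = np.random.default_rng(1).normal(size=200_000)   # sampling cross-check
    y = He.hermeval(xi, c)
    print("analytic:", mean, var, " sampled:", y.mean(), y.var())

For third and higher moments the analogous expressions involve sums over products of several coefficients, and this combinatorial growth is precisely the inefficiency the letter's strategies target.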
- Published
- 2021
40. Probabilistic observation model correction using non-Gaussian belief fusion
- Author
-
Tomonari Furukawa and J. Josiah Steckenrider
- Subjects
Propagation of uncertainty ,Computer science ,Gaussian ,Monte Carlo method ,Probabilistic logic ,State (functional analysis) ,Hardware and Architecture ,Signal Processing ,Probability distribution ,Algorithm ,Recursive Bayesian estimation ,Software ,Information Systems ,Parametric statistics
This paper presents a framework for state estimation which tolerates uncertainty in observation model parameters by (1) incorporating this uncertainty in state observation, and (2) correcting model parameters to improve future state observations. The first objective is met by an uncertainty propagation approach, while the second is achieved by gradient-descent optimization. The novel framework allows state estimates to be represented by non-Gaussian probability distribution functions. By correcting observation model parameters, estimation performance is enhanced since the accuracy of observations is increased. Monte Carlo simulation experiments validate the efficacy of the proposed approach in comparison with conventional estimation techniques, showing that as model parameters converge to ground-truth over time, state estimation correspondingly improves when compared to a static model estimate. Because observation models cannot be known with perfect accuracy and existing approaches do not address parametric uncertainties in non-Gaussian estimation, this work has both novelty and usefulness in most state estimation contexts.
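The parameter-correction half of the idea can be illustrated in one dimension. This toy sketch (ours; it does not reproduce the paper's non-Gaussian framework) corrects an uncertain observation gain by an LMS-style gradient step on the squared innovation, so future observations become more accurate:

    import numpy as np

    rng = np.random.default_rng(0)
    a_true, a_hat, lr = 1.8, 1.0, 0.02      # true and assumed observation gain
    x = 1.0
    for k in range(2000):
        x = 0.999 * x + 0.05 * np.sin(0.01 * k)   # known deterministic dynamics
        z = a_true * x + rng.normal(0.0, 0.05)    # observation, uncertain gain
        resid = z - a_hat * x                     # innovation under current model
        a_hat += lr * resid * x                   # gradient step on resid^2
    print("estimated gain:", round(a_hat, 3), " true gain:", a_true)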
- Published
- 2021
41. Semi-analytical assessment of the relative accuracy of the GNSS/INS in railway track irregularity measurements
- Author
-
Jingnan Liu, Quan Zhang, Qijin Chen, and Xiaoji Niu
- Subjects
Propagation of uncertainty ,Accuracy and precision ,Observational error ,Computer science ,Steady-state Kalman filter ,Monte Carlo method ,Error propagation modeling ,General Medicine ,Kalman filter ,Track (rail transport) ,Relative measurement accuracy ,Precise engineering surveying ,GNSS applications ,Inertial surveying ,Algorithm ,Inertial navigation system ,Technology (General)
An aided Inertial Navigation System (INS) is increasingly exploited in precise engineering surveying, such as railway track irregularity measurement, where a high relative measurement accuracy rather than absolute accuracy is emphasized. However, how to evaluate the relative measurement accuracy of the aided INS has rarely been studied. We address this problem with a semi-analytical method to analyze the relative measurement error propagation of the Global Navigation Satellite System (GNSS) and INS integrated system, specifically for the railway track irregularity measurement application. The GNSS/INS integration in this application is simplified as a linear time-invariant stochastic system driven only by white Gaussian noise, and an analytical solution for the navigation errors in the Laplace domain is obtained by analyzing the resulting steady-state Kalman filter. Then, a time series of the error is obtained through a subsequent Monte Carlo simulation based on the derived error propagation model. The proposed analysis method is then validated through data simulation and field tests. The results indicate that a 1 mm accuracy in measuring the track irregularity is achievable for the GNSS/INS integrated system. Meanwhile, the influences of the dominant inertial sensor errors on the final measurement accuracy are analyzed quantitatively and discussed comprehensively.
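The semi-analytical recipe can be shown in miniature: derive the steady-state Kalman filter for a linear time-invariant system driven by white noise, then Monte Carlo the estimation-error time series with the fixed gain. The 1-D random-walk model and noise levels below are illustrative stand-ins, not the full GNSS/INS error model.

    import numpy as np

    q, r = 1e-4, 1e-2            # process/measurement noise variances (assumed)
    P = 1.0
    for _ in range(1000):        # iterate the Riccati recursion to steady state
        P_pred = P + q
        K = P_pred / (P_pred + r)
        P = (1 - K) * P_pred
    print("steady-state gain:", round(K, 4), " error variance:", P)

    # Monte Carlo of the error process e_k = (1-K)(e_{k-1} + w_k) - K v_k
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, q**0.5, size=(200, 3000))
    v = rng.normal(0.0, r**0.5, size=(200, 3000))
    e = np.zeros(200)
    for k in range(3000):
        e = (1 - K) * (e + w[:, k]) - K * v[:, k]
    print("MC error std:", e.std(), " vs sqrt(P):", P**0.5)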
- Published
- 2021
42. A non-intrusive reduced-order modeling for uncertainty propagation of time-dependent problems using a B-splines Bézier elements-based method and proper orthogonal decomposition: Application to dam-break flows
- Author
-
Azzedine Abdedou and Azzeddine Soulaïmani
- Subjects
Propagation of uncertainty ,Polynomial chaos ,Artificial neural network ,Basis (linear algebra) ,Basis function ,Numerical analysis ,Parameter space ,Projection (linear algebra) ,Computational Mathematics ,Computational Theory and Mathematics ,Flow (mathematics) ,Modeling and Simulation ,Applied mathematics ,Mathematics
A proper orthogonal decomposition-based B-splines Bézier elements method (POD-BSBEM) is proposed as a non-intrusive reduced-order model for uncertainty propagation analysis for stochastic time-dependent problems. The method uses a two-step proper orthogonal decomposition (POD) technique to extract the reduced basis from a collection of high-fidelity solutions called snapshots. A third POD level is then applied on the data of the projection coefficients associated with the reduced basis to separate the time-dependent modes from the stochastic parametrized coefficients. These are approximated in the stochastic parameter space using B-splines basis functions defined in the corresponding Bézier element. The accuracy and the efficiency of the proposed method are assessed using benchmark steady-state and time-dependent problems and compared to the reduced order model-based artificial neural network (POD-ANN) and to the full-order model-based polynomial chaos expansion (Full-PCE). The POD-BSBEM is then applied to analyze the uncertainty propagation through a flood wave flow stemming from a hypothetical dam-break in a river with a complex bathymetry. The results confirm the ability of the POD-BSBEM to accurately predict the statistical moments of the output quantities of interest with a substantial speed-up for both offline and online stages compared to other techniques.
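The core POD step is worth a short sketch: extract a reduced basis from a snapshot matrix via the SVD and truncate by an energy criterion. The snapshot data below are synthetic, and the B-splines Bézier regression of the coefficients is not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)
    # Fake snapshot matrix: each column is one high-fidelity solution (n dofs)
    # for one (time, parameter) sample, built from 3 smooth latent modes.
    n, m = 2000, 150
    x = np.linspace(0.0, 1.0, n)[:, None]
    latent = np.hstack([np.sin(np.pi * k * x) for k in (1, 2, 3)])
    snapshots = latent @ rng.normal(size=(3, m)) + 1e-3 * rng.normal(size=(n, m))

    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.9999)) + 1      # modes for 99.99% energy
    basis = U[:, :r]                                  # reduced basis (n x r)
    coeffs = basis.T @ snapshots                      # projection coefficients
    err = np.linalg.norm(snapshots - basis @ coeffs) / np.linalg.norm(snapshots)
    print("kept", r, "modes; relative reconstruction error:", err)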
- Published
- 2021
43. Nonlinear Kalman Filter-Based Robust Channel Estimation for High Mobility OFDM Systems
- Author
-
Guodong Sun, Zhirong Cai, Yong Liao, Zhen Huang, and Xuanfan Shen
- Subjects
Propagation of uncertainty ,Computer science ,Orthogonal frequency-division multiplexing ,Mechanical Engineering ,Kalman filter ,Multiplexing ,Computer Science Applications ,Extended Kalman filter ,Robustness (computer science) ,Automotive Engineering ,Fading ,Algorithm ,Communication channel
High-speed train (HST) and vehicle-to-vehicle (V2V) communications, as typical 5G scenarios, have attracted extensive attention from academia and industry in recent years. To address the frequency-selective fading, fast time-varying, and time-domain non-stationary channel characteristics of high-mobility scenarios, a nonlinear Kalman filter-based channel estimation algorithm for orthogonal frequency-division multiplexing (OFDM) systems is proposed. We adopt the basis expansion model (BEM) to eliminate the inter-subcarrier interference (ICI) caused by the fast time-varying characteristics. For the non-stationary characteristics of the high-mobility channel, a channel interpolation algorithm based on the extended Kalman filter (EKF) is introduced to jointly estimate the channel impulse response (CIR) and the time correlation coefficients. However, the EKF channel estimation uses a decision-feedback structure to construct the observation matrix, which leads to error propagation; this paper analyzes the origin of that error propagation through theoretical derivation. Furthermore, to counter the EKF's error propagation, we introduce the unscented Kalman filter (UKF) algorithm to perform a Gaussian approximation of the non-Gaussian observation system and suppress its influence within the Kalman filter (KF) framework. Simulation results demonstrate that the channel estimation accuracy of BEM-UKF is further improved compared with BEM-EKF and is less sensitive to the pilot distance (PD), which further improves the robustness of the algorithm.
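The difference between the EKF's linearization and the UKF's sigma-point approach reduces to the unscented transform. A minimal, generic Python version (not the paper's BEM-UKF estimator; parameters are illustrative):

    import numpy as np

    def unscented_transform(mean, cov, f, kappa=1.0):
        # Propagate N(mean, cov) through f with 2n+1 sigma points.
        n = mean.size
        S = np.linalg.cholesky((n + kappa) * cov)     # columns: sigma offsets
        sigma = np.vstack([mean, mean + S.T, mean - S.T])
        w = np.full(2 * n + 1, 0.5 / (n + kappa))
        w[0] = kappa / (n + kappa)
        ys = np.array([f(p) for p in sigma])
        y_mean = w @ ys
        d = ys - y_mean
        return y_mean, (w[:, None] * d).T @ d

    # Polar -> Cartesian: a classic case where EKF linearization is biased.
    f = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
    m, C = np.array([10.0, 0.5]), np.diag([0.1, 0.04])
    y_mean, y_cov = unscented_transform(m, C, f)
    print(y_mean, "\n", y_cov)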
- Published
- 2021
44. Gaussian sum reapproximation applied to the probability of collision calculations
- Author
-
Rui-dong Yan, Wang Ronglan, Liqin Shi, Jiancun Gong, and Siqing Liu
- Subjects
Atmospheric Science ,Propagation of uncertainty ,Propagation time ,Covariance matrix ,Gaussian ,Monte Carlo method ,Aerospace Engineering ,Astronomy and Astrophysics ,Kalman filter ,Covariance ,Geophysics ,Gaussian elimination ,Space and Planetary Science ,General Earth and Planetary Sciences ,Applied mathematics ,Mathematics
In predicting collisions of space debris, the propagated orbital uncertainty may not follow a Gaussian distribution if the initial orbital uncertainty is large or the propagation time is long. In this paper, the Gaussian mixture uncertainty propagation method developed by Psiaki et al. (2015) is used to calculate the collision probability. The initial Gaussian distribution is fitted by weighted Gaussian mixture components. A linear matrix inequality is optimized to prevent the covariance matrices of the Gaussian mixture components from becoming too small, and an appropriate number of Gaussian mixture components is used to approximate the initial orbital covariance. This paper also provides a method to calculate the collision probability of two objects whose orbital uncertainty is represented by a Gaussian mixture. The linear method and the unscented Kalman filter (UKF) method for propagating the Gaussian covariance are analysed. Numerical simulations show that, compared with the linear covariance propagation method, the UKF method, and a high-precision Monte Carlo covariance propagation method for space objects with a large initial orbital uncertainty, the Gaussian mixture method can effectively capture the non-Gaussian characteristics of the predicted nonlinear orbital uncertainty. Compared with the univariate splitting method, the advantage of this Gaussian mixture method is that it does not need to search for the most nonlinear direction. The calculated collision probability is refined from 1.460 × 10^-3 to 1.663 × 10^-3. A comparison of the computational burden between the Gaussian mixture algorithm and Vittaldev's algorithm to achieve the same results is presented; the calculation burden of the Gaussian mixture method is approximately 3 times that of the univariate Gaussian method.
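Once the propagated density is a Gaussian mixture, the collision probability is a weighted sum of per-component probabilities of falling inside the combined hard-body region. A hedged Python sketch (mixture parameters and radius are invented for illustration, not taken from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    R = 20.0                                  # combined hard-body radius (m)

    # Encounter-plane relative-position density as a 3-component mixture,
    # e.g. obtained by splitting and propagating an initial Gaussian state.
    weights = [0.3, 0.5, 0.2]
    means = [np.array([5.0, -10.0]), np.array([15.0, 0.0]), np.array([40.0, 25.0])]
    covs = [np.diag([400.0, 100.0]), np.diag([900.0, 250.0]), np.diag([1600.0, 400.0])]

    def component_pc(mean, cov, n=200_000):
        # Monte Carlo estimate of P(|r| < R) for one Gaussian component.
        pts = rng.multivariate_normal(mean, cov, size=n)
        return np.mean(np.hypot(pts[:, 0], pts[:, 1]) < R)

    pc = sum(w * component_pc(mu, C) for w, mu, C in zip(weights, means, covs))
    print(f"collision probability ~ {pc:.3e}")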
- Published
- 2021
45. A Superpixel Guided Sample Selection Neural Network for Handling Noisy Labels in Hyperspectral Image Classification
- Author
-
Hongyan Zhang, Liangpei Zhang, and Huilin Xu
- Subjects
Propagation of uncertainty ,Artificial neural network ,Computer science ,Feature extraction ,Hyperspectral imaging ,Sample (statistics) ,Pattern recognition ,Backpropagation ,Support vector machine ,General Earth and Planetary Sciences ,Artificial intelligence ,Pruning (decision trees) ,Electrical and Electronic Engineering
Supervised hyperspectral image (HSI) classification has been widely studied and used in many different applications. However, the performance of supervised classifiers, including traditional machine learning methods and deep neural networks, is significantly affected by inaccurate labeling of training samples, which is a common problem in HSI supervised classification. In this article, we propose a superpixel guided sample selection neural network (S3Net) framework with end-to-end training for handling noisy labels in HSI classification. It includes two stages: sample selection and sample correction. In sample selection, a sample with a small training loss has a higher probability of being correctly labeled and is hence selected from the noisy labels for model training. To avoid the error propagation caused by the noisy labels, we utilize a cross-selection update strategy that exchanges selected samples between two neural networks during conventional loss backpropagation. Sample selection is a pruning process, which may cause an insufficient-training-sample problem in HSI classification. To solve this problem, we propose a sample correction strategy that corrects the noisy labels by propagating clean label information within the homogeneous regions obtained by superpixel segmentation. Experimental results on three public HSI data sets demonstrate the effectiveness of the proposed S3Net framework when handling noisy labels.
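The cross-selection update is the co-teaching-style core of the method and fits in a short PyTorch sketch (ours; network sizes, the keep ratio, and the toy data are assumptions, and the superpixel correction stage is not shown):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def cross_selection_step(net_a, net_b, opt_a, opt_b, x, y, keep_ratio=0.8):
        # Each network picks its small-loss samples; the *other* network is
        # then updated on that selection, limiting error propagation.
        loss_a = F.cross_entropy(net_a(x), y, reduction="none")
        loss_b = F.cross_entropy(net_b(x), y, reduction="none")
        k = max(1, int(keep_ratio * x.size(0)))
        idx_a = torch.topk(-loss_a, k).indices     # A's small-loss picks
        idx_b = torch.topk(-loss_b, k).indices     # B's small-loss picks
        opt_a.zero_grad()
        F.cross_entropy(net_a(x[idx_b]), y[idx_b]).backward()
        opt_a.step()
        opt_b.zero_grad()
        F.cross_entropy(net_b(x[idx_a]), y[idx_a]).backward()
        opt_b.step()

    net_a = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 5))
    net_b = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 5))
    opt_a = torch.optim.Adam(net_a.parameters(), lr=1e-3)
    opt_b = torch.optim.Adam(net_b.parameters(), lr=1e-3)
    x, y = torch.randn(64, 20), torch.randint(0, 5, (64,))
    cross_selection_step(net_a, net_b, opt_a, opt_b, x, y)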
- Published
- 2021
46. Probabilistic screening and behavior of solar cells under Gaussian parametric uncertainty using polynomial chaos representation model
- Author
-
Akshit Samadhiya and Kumari Namrata
- Subjects
Propagation of uncertainty ,Polynomial chaos ,Gaussian ,Probabilistic logic ,Probability density function ,General Medicine ,Latin hypercube sampling ,Applied mathematics ,Probabilistic design ,Mathematics ,Parametric statistics
The paper presents a hierarchical polynomial chaos expansion-based probabilistic approach to analyze the single diode solar cell model under Gaussian parametric uncertainty. It is important to analyze the single diode solar cell model's response under random events or factors because of uncertainty propagation. The optimal values of the five electrical parameters associated with the single diode model are estimated using six deterministic optimization techniques through root-mean-square minimization. Values corresponding to the best objective function response are further utilized to describe the probabilistic design space of each random electrical parameter under uncertainty. Adequate samples of each parameter's Gaussian uncertain distribution are generated using Latin hypercube sampling. Furthermore, a multistage probabilistic approach is adopted to evaluate the model response using a low-cost polynomial chaos series expansion and to perform global sensitivity analysis under the specified Gaussian distributions. Coefficients of the polynomial basis functions are calculated using least squares and least angle regression techniques. Unlike the highly non-linear and complex single diode representation of solar cells, the polynomial chaos expansion model carries a low computational burden and reduced complexity. To ensure reproducibility, the probabilistic output response computed using the proposed polynomial chaos expansion model is compared with the true model response. Finally, a multidimensional sensitivity analysis is performed through Sobol decomposition of the polynomial chaos series representation to quantify the contribution of each parameter to the variance of the probabilistic response. The validation and assessment results show that the output probabilistic response of the solar cell under Gaussian parametric uncertainty follows a Rayleigh probability distribution. The output response is characterized by mean values of 0.0060 and 0.0760 for the RTC France and Solarex MSX83 solar cells, respectively, with standard deviations of ±0.0034 and ±0.0052.
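The regression step (Latin hypercube sampling plus least-squares fitting of the expansion coefficients) can be sketched for a 1-D Gaussian input. The model function below is a hypothetical stand-in, not the single diode model:

    import math
    import numpy as np
    from numpy.polynomial import hermite_e as He
    from scipy.stats import norm, qmc

    model = lambda xi: np.exp(0.3 * xi) * np.sin(xi + 1.0)   # stand-in response

    # Latin hypercube samples mapped to a standard normal input.
    u = qmc.LatinHypercube(d=1, seed=0).random(n=200).ravel()
    xi = norm.ppf(u)
    y = model(xi)

    degree = 6
    V = He.hermevander(xi, degree)               # design matrix of He_k(xi)
    c, *_ = np.linalg.lstsq(V, y, rcond=None)    # least-squares coefficients
    mean = c[0]
    var = sum(c[k]**2 * math.factorial(k) for k in range(1, degree + 1))
    print("PCE mean:", mean, " PCE variance:", var)

In the multivariate case the same coefficients also yield the Sobol indices directly, since each index is a ratio of sums of squared coefficients over the corresponding basis subsets.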
- Published
- 2021
47. A copula-based uncertainty propagation method for structures with correlated parametric p-boxes
- Author
-
Haibo Liu, Guilin She, Ming Chen, Jiachang Tang, Chong Du, and Chunming Fu
- Subjects
Propagation of uncertainty ,Estimation theory ,Applied Mathematics ,Cumulative distribution function ,Function (mathematics) ,Theoretical Computer Science ,Copula (probability theory) ,Artificial Intelligence ,Applied mathematics ,Uncertainty quantification ,Akaike information criterion ,Software ,Parametric statistics ,Mathematics - Abstract
In the response analysis of uncertain structural models with limited information, probability-boxes can be effectively employed to address the aleatory and epistemic uncertainty together. This paper presents a copula-based uncertainty propagation method which can accurately perform uncertainty propagation analysis with correlated parametric probability-boxes. Firstly, the parameter estimation and Akaike information criterion analysis are utilized to select an optimal copula based on available samples, by which the joint cumulative distribution function is constructed for the correlated input variables. Then, using the obtained joint cumulative distribution function, the correlated parametric probability-boxes are transformed into independent normal variables, and an efficient method based on sparse grid numerical integration is proposed to calculate the bounds on statistical moments of a response function. Finally, numerical examples and an engineering application are provided to verify the effectiveness of the presented method.
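The transform at the heart of the method, mapping correlated samples to independent standard normals under a fitted Gaussian copula, is short enough to sketch. Only the Gaussian copula case is shown (the paper also scores candidate copulas by AIC), and the toy inputs stand in for p-box samples:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Correlated toy inputs standing in for samples of two p-box variables.
    z = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=1000)
    x = np.column_stack([np.exp(z[:, 0]), 2.0 + z[:, 1] ** 3])

    # 1) Empirical probability-integral transform of each marginal to (0,1).
    u = (stats.rankdata(x, axis=0) - 0.5) / x.shape[0]
    # 2) Normal scores; their correlation matrix defines the Gaussian copula.
    g = stats.norm.ppf(u)
    R = np.corrcoef(g, rowvar=False)
    # 3) Whiten with the Cholesky factor -> independent standard normals.
    L = np.linalg.cholesky(R)
    eta = np.linalg.solve(L, g.T).T
    print("residual correlation:\n", np.corrcoef(eta, rowvar=False).round(3))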
- Published
- 2021
48. Practical 3D human skeleton tracking based on multi-view and multi-Kinect fusion
- Author
-
Ching-Chun Hsiao, Wen-Huang Cheng, Ching-Chun Huang, and Manh-Hung Nguyen
- Subjects
Propagation of uncertainty ,Computer Networks and Communications ,Computer science ,Probabilistic logic ,Tracking system ,Skeleton (category theory) ,Tracking (particle physics) ,Computer graphics ,Consistency (database systems) ,Human skeleton ,Hardware and Architecture ,Media Technology ,Computer vision ,Artificial intelligence ,Software ,Information Systems
In this paper, we propose a multi-view system for 3D human skeleton tracking based on multi-cue fusion. Multiple Kinect version 2 cameras are used to build a low-cost system. Though Kinect cameras can extract a 3D skeleton from their depth sensors, several challenges to skeleton extraction remain, such as left–right confusion and severe self-occlusion. Moreover, human skeleton tracking systems often have difficulty dealing with lost tracking. These challenges make robust 3D skeleton tracking nontrivial. To address them in a unified framework, we first correct the skeleton's left–right ambiguity by referring to the human joints extracted by OpenPose. Unlike Kinect, OpenPose extracts target joints by learning-based image analysis that can differentiate a person's front side from their backside. With help from the 2D images, we can correct the left–right skeleton confusion. On the other hand, we find that self-occlusion severely degrades Kinect joint detection owing to incorrect joint depth estimation. To alleviate this problem, we reconstruct a reference 3D skeleton by back-projecting the corresponding 2D OpenPose joints from multiple cameras. The reconstructed joints are less sensitive to occlusion and can serve as 3D anchors for skeleton fusion. Finally, we introduce inter-joint constraints into our probabilistic skeleton tracking framework to trace all joints simultaneously. Unlike conventional methods that treat each joint individually, neighboring joints are utilized to position each other. In this way, when joints are missing due to occlusion, the inter-joint constraints ensure skeleton consistency and preserve the length between neighboring joints. We evaluate our method on five challenging actions with a real-time demo system, which tracks skeletons stably without error propagation or vibration. The experimental results also show that the average localization error is smaller than that of conventional methods.
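The back-projection step amounts to multi-view triangulation of each 2D joint. A minimal linear (DLT) sketch with synthetic camera matrices (our construction, not the paper's calibration):

    import numpy as np

    def triangulate(P_list, uv_list):
        # Linear (DLT) least squares: each view contributes two rows.
        rows = []
        for P, (u, v) in zip(P_list, uv_list):
            rows.append(u * P[2] - P[0])
            rows.append(v * P[2] - P[1])
        _, _, Vt = np.linalg.svd(np.array(rows))
        X = Vt[-1]
        return X[:3] / X[3]

    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), [[0.0], [0.0], [0.0]]])
    Ry = np.array([[0.0, 0, 1], [0, 1, 0], [-1, 0, 0]])       # rotation about y
    P2 = K @ np.hstack([Ry, [[-2.0], [0.0], [2.0]]])

    X_true = np.array([0.3, -0.2, 4.0])
    def project(P, X):
        h = P @ np.append(X, 1.0)
        return h[:2] / h[2]
    print(triangulate([P1, P2], [project(P1, X_true), project(P2, X_true)]))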
- Published
- 2021
49. An adaptive Gaussian mixture method for nonlinear uncertainty propagation in neural networks
- Author
-
Yung C. Shin and Bin Zhang
- Subjects
Propagation of uncertainty ,Artificial neural network ,Computer science ,Cognitive Neuroscience ,Gaussian ,Bottleneck ,Computer Science Applications ,Moment (mathematics) ,Set (abstract data type) ,Nonlinear system ,Artificial Intelligence ,Algorithm ,Finite set
Using neural networks to address data-driven problems often entails dealing with uncertainties. However, the propagation of uncertainty through a network's nonlinear layers is usually a bottleneck, since existing techniques designed to transmit Gaussian distributions via moment estimation are not capable of predicting non-Gaussian distributions. In this study, a Gaussian-mixture-based uncertainty propagation scheme is proposed for neural networks. Given that any input uncertainty can be characterized as a Gaussian mixture with a finite number of components, the developed scheme actively examines each mixture component and adaptively splits those whose fidelity in representing the uncertainty is degraded by the network's nonlinear activation layers. A Kullback–Leibler criterion that directly measures the nonlinearity-induced non-Gaussianity in post-activation distributions is derived to trigger splitting, and a set of high-precision Gaussian splitting libraries is established. Four uncertainty propagation examples on dynamic systems and data-driven applications are demonstrated, in all of which the developed scheme exhibits high fidelity and efficiency in predicting the evolution of non-Gaussian distributions through both recurrent and multi-layer neural networks.
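The splitting criterion can be mimicked in a toy 1-D setting: push a Gaussian through an activation, estimate from samples how far the output is from its Gaussian moment fit, and split the input when the criterion fires. The 3-way split below only preserves mean and variance (the paper's libraries are higher-precision), and the threshold is arbitrary:

    import numpy as np
    from scipy.stats import gaussian_kde, norm

    rng = np.random.default_rng(0)
    act = np.tanh

    def kl_to_gaussian_fit(y):
        # KL( kde(y) || N(mean, var) ), estimated from the samples themselves.
        kde = gaussian_kde(y)
        return float(np.mean(np.log(kde(y)) - norm.logpdf(y, y.mean(), y.std())))

    mu, sig, threshold = 0.0, 1.5, 0.01
    y = act(rng.normal(mu, sig, 5000))
    print("post-activation KL:", round(kl_to_gaussian_fit(y), 4))

    # Crude mean/variance-preserving 3-way split: means mu +/- m, shared
    # width s, weights w, 1-2w, w. Each narrower component stays closer to
    # Gaussian after the activation.
    w, m = 0.25, sig
    s = np.sqrt(sig**2 - 2 * w * m**2)
    for wk, mk in [(w, mu - m), (1 - 2 * w, mu), (w, mu + m)]:
        yk = act(rng.normal(mk, s, 5000))
        print(f"component (w={wk:.2f}): KL = {kl_to_gaussian_fit(yk):.4f}")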
- Published
- 2021
50. The sensitivity of power system expansion models
- Author
-
Alexander Kies, Bruno U. Schyska, Wided Medjroubi, Lueder von Bremen, and Markus Schlott
- Subjects
Propagation of uncertainty ,Mathematical optimization ,Computer science ,Robust optimization ,Transparency (human–computer interaction) ,Electric power system ,Energy systems analysis ,Sensitivity analysis ,Modeling to generate alternatives ,General Energy ,Temporal resolution ,Metric (mathematics) ,Sensitivity (control systems) ,Reliability (statistics)
Optimization models are a widely used tool in academia. In order to build these models, various parameters need to be specified, and simplifications are often necessary to ensure the tractability of the models; both of these introduce uncertainty about the model results. However, a widely accepted way to quantify how these uncertainties propagate does not exist. Using the example of power system expansion modeling, we show that uncertainty propagation in optimization models can be described systematically by quantifying the sensitivity to different model parameters and model designs. We quantify the sensitivity with a misallocation measure with clearly defined mathematical properties for two prominent examples: the cost of capital and different model resolutions. When used to disclose sensitivity information in power system studies, our approach can contribute to openness and transparency in power system research. We find that power system models are particularly sensitive to the temporal resolution of the underlying time series.
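A misallocation measure compares where capacity is sited across model runs. As a hedged illustration only (the paper defines its own measure; this is a generic choice), half the L1 distance between normalized allocation vectors gives a number between 0 and 1:

    import numpy as np

    def misallocation(a, b):
        # Half the L1 distance between normalized capacity allocations:
        # 0 = identical siting, 1 = completely disjoint siting.
        a = np.asarray(a, float) / np.sum(a)
        b = np.asarray(b, float) / np.sum(b)
        return 0.5 * np.abs(a - b).sum()

    base = [40.0, 25.0, 35.0]       # GW per region, reference run (made up)
    low_wacc = [55.0, 20.0, 25.0]   # same model, cheaper cost of capital
    coarse = [42.0, 24.0, 34.0]     # same model, coarser temporal resolution
    print("cost-of-capital sensitivity:", misallocation(base, low_wacc))
    print("temporal-resolution sensitivity:", misallocation(base, coarse))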
- Published
- 2021