107 results for "error modelling"
Search Results
2. Optimal estimated standard uncertainties of reflection intensities for kinematical refinement from 3D electron diffraction data.
- Author
-
Khouchen, Malak, Klar, Paul Benjamin, Chintakindi, Hrushikesh, Suresh, Ashwin, and Palatinus, Lukas
- Subjects
- ELECTRON diffraction, ESTIMATION theory, DATA reduction, DATA quality
- Abstract
Estimating the error in the merged reflection intensities requires a full understanding of all the possible sources of error arising from the measurements. Most diffraction‐spot integration methods focus mainly on errors arising from counting statistics for the estimation of uncertainties associated with the reflection intensities. This treatment may be incomplete and partly inadequate. In an attempt to fully understand and identify all the contributions to these errors, three methods are examined for the correction of estimated errors of reflection intensities in electron diffraction data. For a direct comparison, the three methods are applied to a set of organic and inorganic test cases. It is demonstrated that applying the corrections of a specific model that includes terms dependent on the original uncertainty and the largest intensity of the symmetry‐related reflections improves the overall structure quality of the given data set and the final R_all factor. This error model is implemented in the data reduction software PETS2. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
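The correction scheme summarized in the abstract above (extra uncertainty terms driven by the original sigma and the largest symmetry-equivalent intensity) can be sketched as follows. The two-parameter form and the values of `a` and `b` are illustrative assumptions, not the actual PETS2 model:

```python
import numpy as np

def corrected_sigma(sigma, i_max, a=1.2, b=0.03):
    """Inflate counting-statistics uncertainties with an extra term
    driven by the largest symmetry-equivalent intensity (hypothetical
    two-parameter form; the published model may differ)."""
    return np.sqrt((a * sigma) ** 2 + (b * i_max) ** 2)

# Three symmetry-equivalent observations of one reflection.
intensities = np.array([100.0, 110.0, 95.0])
sigmas = np.array([10.0, 11.0, 9.5])
sig_corr = corrected_sigma(sigmas, intensities.max())

# Inverse-variance weighted merge of the equivalents.
w = 1.0 / sig_corr**2
i_merged = np.sum(w * intensities) / np.sum(w)
sigma_merged = np.sqrt(1.0 / np.sum(w))
```

Because the correction only inflates the input sigmas, the merged uncertainty stays conservative relative to a pure counting-statistics merge.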
3. Reference-free Bayesian model for pointing errors of type in neurosurgical planning.
- Author
-
Baxter, John S. H., Croci, Stéphane, Delmas, Antoine, Bredoux, Luc, Lefaucheur, Jean-Pascal, and Jannin, Pierre
- Abstract
Purpose: Many neurosurgical planning tasks rely on identifying points of interest in volumetric images. Often, these points require significant expertise to identify correctly as, in some cases, they are not visible but instead inferred by the clinician. This leads to a high degree of variability between annotators selecting these points. In particular, errors of type are when the experts fundamentally select different points rather than the same point with some inaccuracy. This complicates research as their mean may not reflect any of the experts' intentions nor the ground truth. Methods: We present a regularised Bayesian model for measuring errors of type in pointing tasks. This model is reference-free, in that it does not require a priori knowledge of the ground truth point but instead works on the basis of the level of consensus between multiple annotators. We apply this model to simulated data and clinical data from transcranial magnetic stimulation for chronic pain. Results: Our model estimates the probabilities of selecting the correct point in the range of 82.6-88.6% with uncertainties in the range of 2.8-4.0%. This agrees with the literature where ground truth points are known. The uncertainty has not previously been explored in the literature and gives an indication of the dataset's strength. Conclusions: Our reference-free Bayesian framework easily models errors of type in pointing tasks. It allows for clinical studies to be performed with a limited number of annotators where the ground truth is not immediately known, which can be applied widely for better understanding human errors in neurosurgical planning. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
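The reference-free idea in the abstract above (estimating the probability of a correct selection from inter-annotator consensus alone) can be illustrated with a toy calculation. The discrete candidate labels, the probabilities and the square-root-of-agreement estimator are deliberate simplifications, not the paper's regularised Bayesian model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 annotators each pick one of several candidate points per
# case. Each picks the true point with probability p_true, otherwise a
# random distractor (labels stand in for 3-D points).
p_true, n_annotators, n_cases = 0.85, 4, 2000
picks = np.where(
    rng.random((n_cases, n_annotators)) < p_true,
    0,                                         # 0 = the correct point
    rng.integers(1, 6, (n_cases, n_annotators)),
)

# Pairwise agreement rate between annotators 0 and 1.
agree = np.mean(picks[:, 0] == picks[:, 1])

# If wrong picks rarely coincide, agreement ~ p^2, so a reference-free
# estimate of the per-annotator success probability is sqrt(agreement).
p_hat = np.sqrt(agree)
```

The estimate needs no ground truth, which is the core of the reference-free approach; the published model additionally regularises and quantifies the uncertainty of this kind of estimate.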
4. Using Digital Twins in the Development of Complex Dependable Real-Time Embedded Systems
- Author
-
Dai, Xiaotian, Zhao, Shuai, Lesage, Benjamin, Bate, Iain, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, and Margaria, Tiziana, editor
- Published
- 2022
- Full Text
- View/download PDF
5. A Novel Algorithm Modelling for UWB Localization Accuracy in Remote Sensing.
- Author
-
Yu, Zhengyu, Chaczko, Zenon, and Shi, Jiajia
- Subjects
- REMOTE sensing, LOCALIZATION (Mathematics), TECHNOLOGICAL innovations, KALMAN filtering, ALGORITHMS, CURVE fitting
- Abstract
At present, ultra-wideband (UWB) technology plays a vital role in the environment of indoor localization. As a new technology of wireless communications, UWB has many advantages, such as high accuracy, strong anti-multipath ability, and high transmission rate. However, in real-time operation, the accuracy of UWB is reduced by multi-sensor interference, antenna variations and system operation noise. We have developed a novel error model based on the curve-fitted Kalman filter (CFKF) algorithm to solve these issues. This paper involves investigating and developing an error modelling algorithm that can calibrate the signal sensors, reduce the errors, and mitigate noise levels and interference signals. As part of the research investigation, a range of experiments was executed to validate the CFKF error modelling approach's accuracy, reliability and viability. The experimental results indicate that this novel approach significantly improves the accuracy and precision of beacon-based localization. Validation tests also show that the CFKF error modelling method can improve the localization accuracy of UWB-based solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
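A curve-fit-plus-Kalman pipeline of the kind the CFKF abstract describes might look like the sketch below. The quadratic bias model, the noise levels and the filter tuning are assumptions for illustration, not the published algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Calibration: pair raw UWB readings with reference ranges and fit a
# correction curve (hypothetical offset + quadratic range bias).
raw_cal = np.linspace(1.0, 20.0, 40)
ref_cal = raw_cal - (0.10 + 0.01 * raw_cal**2) + rng.normal(0, 0.02, 40)
coeffs = np.polyfit(raw_cal, ref_cal, 2)      # raw reading -> range

# Scalar Kalman filter over a stream of curve-corrected readings.
def kalman_1d(zs, q=1e-4, r=0.02**2):
    x, p, out = zs[0], 1.0, []
    for z in zs:
        p += q                                # predict (near-static tag)
        k = p / (p + r)                       # Kalman gain
        x += k * (z - x)                      # measurement update
        p *= 1 - k
        out.append(x)
    return np.array(out)

raw_stream = rng.normal(5.4, 0.02, 200)       # tag at a fixed raw range
est = kalman_1d(np.polyval(coeffs, raw_stream))
```

Correcting the systematic bias first keeps the filter's Gaussian-noise assumption plausible, which is presumably why the curve fit precedes the Kalman stage.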
6. Optimal Design Method for Static Precision of Heavy-Duty Vertical Machining Center Based on Gravity Deformation Error Modelling.
- Author
-
Wang, Han, Li, Tianjian, Sun, Xizhi, Mynors, Diane, and Wu, Tao
- Subjects
- CENTER of mass, DEFORMATIONS (Mechanics), MACHINE tools, MULTIBODY systems, MACHINE theory, ELECTRIC discharges
- Abstract
Due to the large size and large span of heavy-duty machine tools, the structural deformation errors caused by gravity account for a large proportion of the static errors, and the influence of gravity deformation must thus be considered in the machine tool precision design. This paper proposes a precision design method for heavy-duty vertical machining centers based on gravity deformation error modelling. By abstracting the machine tool into a multibody system topology, the static error model of the machine tool is established based on the multibody system theory and a homogeneous coordinate transformation. Assuming that the static error of each motion axis is composed of two parts, i.e., the manufacturing-induced geometric error and the gravity deformation error, the machine tool stiffness model of the relationship between gravity and deformation error is developed using spatial beam elements. In the modelling process, the stiffness coefficients and volume coefficients of the components are introduced to fully consider the influences of structural parameters on machine tool precision. Taking the machine tool static precision, the component stiffness coefficients and the volume coefficients as the design variables, and using the worst-condition method, error sensitivity analysis and a global optimization algorithm, the optimal allocation of the static error budget of the machine tool and the structural design requirements of each component are determined, providing a valuable guide for the detailed structural design and manufacturing of the machine tool components. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
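The homogeneous-coordinate error chaining behind static error models like the one in the record above can be sketched minimally. The small-angle linearised form is standard in machine-tool error modelling, but the numeric error values here are purely illustrative:

```python
import numpy as np

def htm(dx=0.0, dy=0.0, dz=0.0, rx=0.0, ry=0.0, rz=0.0):
    """Small-angle homogeneous transform: rotational errors rx, ry, rz
    (rad) and translational errors dx, dy, dz, linearised as is usual
    in machine-tool error modelling."""
    return np.array([
        [1.0, -rz,  ry, dx],
        [ rz, 1.0, -rx, dy],
        [-ry,  rx, 1.0, dz],
        [0.0, 0.0, 0.0, 1.0],
    ])

# Nominal X-axis move of 500 mm, with hypothetical error motions
# (e.g. combined geometric and gravity-sag terms).
nominal = np.eye(4)
nominal[0, 3] = 500.0
err_x = htm(dx=0.01, dy=0.005, rz=20e-6)

# Chain nominal and error transforms; the translational part of the
# difference is the volumetric error at the tool point.
actual = nominal @ err_x
tool_err = actual[:3, 3] - nominal[:3, 3]
```

In a full model one such error transform is chained per axis in the topological order of the multibody system, so angular errors early in the chain get amplified by the downstream offsets.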
7. Model for surface topography prediction in the ultra-precision grinding of silicon wafers considering volumetric errors.
- Author
-
Cai, Yindi, Yang, Yang, Wang, Yuxuan, Wang, Ronghao, Zhu, Xianglong, and Kang, Renke
- Subjects
- SURFACE topography, SILICON wafers, GRINDING machines, LASER measurement, MEASUREMENT errors, PREDICTION models
- Abstract
• A model for wafer surface topography prediction in ultra-precision grinding considering volumetric errors is proposed. • An HTM-based volumetric error model is optimized considering Abbe and Bryan errors induced during error measurement. • On-machine measurement of the translational axes' geometric errors is realized using a system developed by the authors' group. • The predicted and experimental wafer surface topographies are in reasonable agreement with each other. The ultra-precision grinding machine is widely used for thinning silicon wafers. The wafer surface quality is considerably affected by volumetric errors at the functional point of the ultra-precision grinding machine. Therefore, a volumetric error model is established based on the homogeneous transformation matrix (HTM) method. The model is then optimized by considering the volumetric errors induced by Abbe and Bryan errors during the error measurement process. A prediction model for the wafer surface topography is established by combining the manufacturing mechanism of ultra-precision grinding and the optimized volumetric error model of the ultra-precision grinding machine. Volumetric errors are detected using a laser measurement system developed by the authors' group and a commercial spindle error analyzer. The model enables prediction of the topography and total thickness variation (TTV) of the wafer surface. A set of grinding experiments is performed to verify the effectiveness of the proposed models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Research on geometric error modelling and decoupling of machine tool based on NURBS projection curve.
- Author
-
Li, Wei, Feng, Yanyan, Zhang, Shihui, and Zuo, Wei
- Subjects
- GEOMETRIC modeling, MATHEMATICAL decoupling, MACHINE tools, MEASURING instruments
- Abstract
As a geometric measuring instrument for machine tools, the double ball bar (DBB) can effectively identify the geometric errors of a machine tool. In the existing literature, the DBB mostly uses a two-dimensional circle as the identification trajectory to identify the geometric errors continuously, and the most efficient mode of single recognition is still confined to the two-dimensional plane. To address the low efficiency of DBB spatial geometric error recognition, the recognition range of the DBB is extended from the two-dimensional plane to a spatial hemisphere. A method of spatial recognition and decoupling based on a NURBS projection curve is proposed in this paper. The error decoupling of the feed axes and rotation axes is realized in an experiment. The identification method has the characteristics of high efficiency, no secondary assembly and avoidance of interference. It can decouple all five axes in hemispherical space at one time and obtain an effective solution. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
9. Robust Control
- Author
-
Williams, Noah and Macmillan Publishers Ltd
- Published
- 2018
- Full Text
- View/download PDF
10. Error Modelling and Sensitivity Analysis of a Planar 3-PRP Parallel Manipulator
- Author
-
Mohanta, Jayant K., Mohan, Santhakumar, Huesing, Mathias, Corves, Burkhard, Ceccarelli, Marco, Series editor, Corves, Burkhard, Advisory editor, Takeda, Yukio, Advisory editor, Zeghloul, Saïd, editor, Romdhane, Lotfi, editor, and Laribi, Med Amine, editor
- Published
- 2018
- Full Text
- View/download PDF
11. A Novel Algorithm Modelling for UWB Localization Accuracy in Remote Sensing
- Author
-
Zhengyu Yu, Zenon Chaczko, and Jiajia Shi
- Subjects
- ultra-wideband (UWB), localization, Kalman filter (KF), error modelling, Internet of Things (IoT), Science
- Abstract
At present, ultra-wideband (UWB) technology plays a vital role in the environment of indoor localization. As a new technology of wireless communications, UWB has many advantages, such as high accuracy, strong anti-multipath ability, and high transmission rate. However, in real-time operation, the accuracy of UWB is reduced by multi-sensor interference, antenna variations and system operation noise. We have developed a novel error model based on the curve-fitted Kalman filter (CFKF) algorithm to solve these issues. This paper involves investigating and developing an error modelling algorithm that can calibrate the signal sensors, reduce the errors, and mitigate noise levels and interference signals. As part of the research investigation, a range of experiments was executed to validate the CFKF error modelling approach's accuracy, reliability and viability. The experimental results indicate that this novel approach significantly improves the accuracy and precision of beacon-based localization. Validation tests also show that the CFKF error modelling method can improve the localization accuracy of UWB-based solutions.
- Published
- 2022
- Full Text
- View/download PDF
12. Systematic approach to realizing optimal moving order selection for multi-axis precise motion system.
- Author
-
Tang, Hao and Zhang, Zilin
- Subjects
- MOTION, ALGORITHMS, TEST methods
- Abstract
This paper presents a systematic approach for selecting the optimal moving order (MO) in a multi-axis precise motion system (MPMS). According to the proposed procedure, an optimal MO selection approach for high-efficiency and high-accuracy requirements is introduced. First, the characteristics of the given MPMS are analysed, and the error model is established. The orthogonal test method is used to evaluate the influence on the MO caused by different MPMS configurations. Second, the number of possible MOs can be narrowed to a limited set of satisfactory MO types. By calculating the deviations after each step, all directional movements can be arranged. Third, considering that both accuracy and efficiency are important indexes, a series of systematic formulations is developed to select the optimal MO that balances accuracy and efficiency. A case in which a six-axis precise platform is adopted in an optoelectronic packaging system is implemented, and the methods of high-quality MO selection are verified by a series of experiments and shown to be useful and effective. To balance the proportions of efficiency and accuracy, a formula and corresponding model are proposed to select the MO. The approach is not only beneficial to the accuracy improvement and trajectory planning of an MPMS, but also helpful in reducing the computational processing for the subsequent algorithm. For engineers in the precision industry, the proposed approach can significantly improve the operative precision of an MPMS with an optimal MO. This MO methodology can also serve as a reference for error-related analyses of an MPMS. • This paper introduces a novel moving order selection approach for a multi-axis precise motion platform by searching the minimum deviation DOF-by-DOF. • The approach provides corresponding formulas for balancing efficiency and accuracy under different requirements.
• A case indicates that the approach greatly reduces the computational volume in moving order finding. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
13. Uncertainty model for a traceable stereo-photogrammetry system.
- Author
-
Sims-Waterhouse, Danny, Isa, Mohammed, Piano, Samanta, and Leach, Richard
- Subjects
- MONTE Carlo method, UNCERTAINTY, LENGTH measurement, LASER interferometers, VOLUME measurements, INTERFEROMETERS
- Abstract
Through the computational modelling and experimental verification of a stereo-photogrammetry system, the expanded uncertainty on form measurement was estimated and found to be 32 μm, 12 μm and 29 μm for a 95% confidence interval (coverage factors of k = 3.2, 2.0 and 2.0 respectively) in the x, y and z axes respectively. The contribution of systematic offsets in the system properties was also investigated, demonstrating the complex distortions of the measurement volume that result from these systematic errors. Additionally, a traceable method of applying a scale factor to the reconstruction was demonstrated using a laser interferometer and gauge block. The relative standard uncertainty on the size of the measurements was estimated to be 0.007%, corresponding to length measurement uncertainties of around 7 μm over a 100 mm range. Finally, the residuals from a linear fit of the scale factor were found to exhibit behaviour that would be expected to result from small offsets in the system properties. The outcome of this work is a better understanding of the propagation of uncertainty through the stereo-photogrammetry system as well as highlighting key influence factors that must be addressed in future work. • Monte Carlo simulation of a stereo-photogrammetry system. • Traceable calibration of stereo-photogrammetry measurements. • Effect of systematic errors on stereo-photogrammetry is demonstrated and verified. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
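The Monte Carlo route to an expanded uncertainty described in the record above can be sketched generically. The toy stereo model b·f/d and the input uncertainties below are assumptions for illustration, not the paper's system model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical measurement model: a reconstructed coordinate from
# baseline b, focal length f and disparity d (toy stereo relation).
def reconstruct_x(b, f, d):
    return b * f / d

n = 100_000
b = rng.normal(200.0, 0.05, n)     # mm, standard uncertainty 0.05 mm
f = rng.normal(8.0, 0.002, n)      # mm
d = rng.normal(1.6, 0.001, n)      # mm

x = reconstruct_x(b, f, d)
u = x.std(ddof=1)                  # standard uncertainty by Monte Carlo
k = 2.0                            # coverage factor for ~95 % coverage
U = k * u                          # expanded uncertainty
covered = np.mean(np.abs(x - x.mean()) <= U)
```

Propagating samples rather than linearised sensitivities captures non-linear distortions of the output distribution automatically, which matters when (as in the paper's x-axis, k = 3.2) the output is visibly non-Gaussian.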
14. Sources of uncertainties for total chloride profile measurements in concrete: quantization and impact on probability assessment of corrosion initiation.
- Author
-
Bonnet, Stéphanie, Schoefs, Franck, and Salta, Manuela
- Subjects
- ERRORS-in-variables models, REINFORCED concrete corrosion, MEASUREMENT errors, CHLORIDES, REINFORCED concrete, PROBABILISTIC number theory, PROBABILITY theory
- Abstract
Reliability methods have proved in the past to be rational aid-tools for the safety assessment of existing structures, within which some uncertainties occur. Condition assessment is usually carried out using on-site measurements, which are assumed perfect. However, it is now accepted that some significant uncertainties may affect the assessment of material properties using semi-destructive methods. The purpose of this paper is to present a method for the identification and evaluation of measurement uncertainties using a bias and a zero-mean error modelled by a random variable. The uncertainties obtained are then described by a probabilistic model. In a marine environment, the main cause of reinforced concrete structure degradation is corrosion due to chloride ingress. The chloride profiles are determined using a destructive method involving many steps in which the experimenter plays a key role. In order to identify sources of error, four researchers performed repeatability tests. The total chloride content is expected to be the same for all the samples. The heterogeneity has been studied using statistical analysis. A value of the bias is provided and the model results are consistent with the original results. Finally, the impact of measurement errors on reliability and life-cycle assessment is discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
15. Error modelling–based machining sequence optimization of a pocketed beam milling: part A, end supported beam
- Author
-
Yao, Shaoming
- Published
- 2021
- Full Text
- View/download PDF
16. Optimal Design Method for Static Precision of Heavy-Duty Vertical Machining Center Based on Gravity Deformation Error Modelling
- Author
-
Han Wang, Tianjian Li, Xizhi Sun, Diane Mynors, and Tao Wu
- Subjects
- gravity deformation, error modelling, Process Chemistry and Technology, Chemical Engineering (miscellaneous), static precision design, Bioengineering, heavy-duty vertical machining center
- Abstract
Due to the large size and large span of heavy-duty machine tools, the structural deformation errors caused by gravity account for a large proportion of the static errors, and the influence of gravity deformation must thus be considered in the machine tool precision design. This paper proposes a precision design method for heavy-duty vertical machining centers based on gravity deformation error modelling. By abstracting the machine tool into a multibody system topology, the static error model of the machine tool is established based on the multibody system theory and a homogeneous coordinate transformation. Assuming that the static error of each motion axis is composed of two parts, i.e., the manufacturing-induced geometric error and the gravity deformation error, the machine tool stiffness model of the relationship between gravity and deformation error is developed using spatial beam elements. In the modelling process, the stiffness coefficients and volume coefficients of the components are introduced to fully consider the influences of structural parameters on machine tool precision. Taking the machine tool static precision, the component stiffness coefficients and the volume coefficients as the design variables, and using the worst-condition method, error sensitivity analysis and a global optimization algorithm, the optimal allocation of the static error budget of the machine tool and the structural design requirements of each component are determined, providing a valuable guide for the detailed structural design and manufacturing of the machine tool components.
- Published
- 2022
- Full Text
- View/download PDF
17. MODELLING ERRORS IN X-RAY FLUOROSCOPIC IMAGING SYSTEMS USING PHOTOGRAMMETRIC BUNDLE ADJUSTMENT WITH A DATA-DRIVEN SELF-CALIBRATION APPROACH.
- Author
-
Chow, J. C. K., Lichti, D. D., Ang, K. D., Al-Durgham, K., Kuntze, G., Sharma, G., and Ronsky, J.
- Subjects
- MODELS & modelmaking, IMAGING systems, QUALITY control
- Abstract
X-ray imaging is a fundamental tool of routine clinical diagnosis. Fluoroscopic imaging can further acquire X-ray images at video frame rates, thus enabling non-invasive in-vivo motion studies of joints, the gastrointestinal tract, etc. For both the qualitative and quantitative analysis of static and dynamic X-ray images, the data should be free of systematic biases. Besides precise fabrication of hardware, software-based calibration solutions are commonly used for modelling the distortions. In this primary research study, a robust photogrammetric bundle adjustment was used to model the projective geometry of two fluoroscopic X-ray imaging systems. However, instead of relying on an expert photogrammetrist's knowledge and judgement to decide on a parametric model for describing the systematic errors, a self-tuning data-driven approach is used to model the complex non-linear distortion profile of the sensors. Quality control from the experiment showed that 0.06 mm to 0.09 mm 3D reconstruction accuracy was achievable post-calibration using merely 15 X-ray images. As part of the bundle adjustment, the location of the virtual fluoroscopic system relative to the target field can also be spatially resected with an RMSE between 3.10 mm and 3.31 mm. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
18. Error modelling and motion reliability analysis of a planar parallel manipulator with multiple uncertainties.
- Author
-
Zhan, Zhenhui, Zhang, Xianmin, Jian, Zhicong, and Zhang, Haodong
- Subjects
- MANIPULATORS (Machinery), STRUCTURAL dynamics, MOTION control devices, KINEMATICS, MICROMACHINING
- Abstract
The inherent uncertainties of a manipulator, including manufacturing tolerances, input errors and joint clearances, cause deviations between the actual motion and the expected motion, leading to a motion reliability problem. This paper focuses on the motion reliability of a planar 3-RRR parallel manipulator with multiple uncertainties. First, the error model of the manipulator is built. Then, an analytical method is presented to verify its validity and accuracy. To address the complexity of the motion in a journal-bearing joint, the joint clearance parameters are modelled as interval variables while other parameters are treated as random variables. A new hybrid approach to motion reliability analysis based on the first order second moment (FOSM) method and the Monte Carlo simulation (MCS) method is developed for the manipulator with both random and interval variables. This method has an easier simulation process than that of the conventional MCS method using direct kinematics. Compared to the probability method with random variables, the proposed hybrid method has a higher confidence estimate of motion reliability for the manipulator with joint clearances. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
19. Geometric Error Measurement and Identification for Rotational Axes of a Five-Axis CNC Machine Tool.
- Author
-
Yanbing Ni, Xiance Liu, Biao Zhang, Zhiwen Zhang, and Jinhe Li
- Subjects
- MEASUREMENT errors, NUMERICAL control of machine tools, COORDINATE transformations, MACHINING, SENSITIVITY analysis
- Abstract
It is difficult to measure the attitude errors of five-axis machine tools, as it requires expensive instruments. Furthermore, it is hard to measure and identify them at production sites. In this paper, the B/C type of five-axis CNC machine tool is studied; the rotational axes are in the cutting tool movement chain and workpiece movement chain. The method of geometric error measurement and identification of the rotational axes is proposed based on the use of a double ball bar (DBB). The geometric error sources in two rotational axes are analysed. Then, based on the homogeneous coordinate transformation principle, the relationships between the geometric errors of each rotational axis and the structure parameters of the machine tool are established on the C turntable and B swinging head, respectively. Identification models for each axis are built by changing the installation modes of the DBB in three coordinate directions. The geometric errors in the rotational axes are identified by the least-squares method. Finally, the error measurement experiment of the machine tool is carried out for different installation modes of a DBB. The experimental results show that the identified error parameters are consistent with actual conditions and verify the validity and feasibility of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
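The least-squares identification step the abstract above describes (recovering geometric error parameters from double ball bar readings at many poses) can be sketched as a linear estimation problem. The sensitivity matrix below is hypothetical rather than derived from the paper's kinematic model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy identification: the DBB bar-length deviation is assumed linear in
# the unknown error parameters e, dr = J @ e. In a real model J comes
# from the homogeneous-coordinate error chain; here its columns are
# illustrative harmonic sensitivities over one turntable revolution.
angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
J = np.column_stack([np.cos(angles), np.sin(angles),
                     np.cos(2 * angles), np.ones_like(angles)])

e_true = np.array([5e-3, -3e-3, 1e-3, 2e-3])       # mm, "ground truth"
dr = J @ e_true + rng.normal(0, 1e-4, angles.size) # noisy DBB readings

# Least-squares identification of the error parameters.
e_hat, *_ = np.linalg.lstsq(J, dr, rcond=None)
```

Sampling a full revolution makes the harmonic columns of J mutually orthogonal, which is what lets a single DBB run separate the individual error components cleanly.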
20. Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation.
- Author
-
Ruotsalainen, Laura, Kirkko-Jaakkola, Martti, Rantanen, Jesperi, and Mäkelä, Maija
- Subjects
- MICROELECTROMECHANICAL systems, PATTERN recognition systems, ALGORITHMS, RANDOM noise theory, KALMAN filtering, MULTISENSOR data fusion
- Abstract
The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy. Therefore, sophisticated error modelling and implementation of integration algorithms are key for providing a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither of the assumptions is correct for tactical applications, especially for dismounted soldiers, or rescue personnel. Therefore, error modelling and implementation of advanced fusion algorithms are essential for providing a viable result. Our approach is to use particle filtering (PF), which is a sophisticated option for integrating measurements emerging from pedestrian motion having non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and vision based heading and translation measurements to include the correct error probability density functions (pdf) in the particle filter implementation. Then, model fitting is used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, particle filtering method is developed to fuse all this information, where the weights of each particle are computed based on the specific models derived. 
The performance of the developed method is tested via two experiments, one at a university’s premises and another in realistic tactical conditions. The results show significant improvement on the horizontal localization when the measurement errors are carefully modelled and their inclusion into the particle filtering implementation correctly realized. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
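A minimal particle filter with a non-Gaussian, heavy-tailed measurement pdf (the situation the abstract above argues Kalman filters handle poorly) can be sketched as follows. The Student-t error model and all tuning values are illustrative assumptions, not the paper's fitted error models:

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-D position tracking with heavy-tailed (Student-t) range errors.
n_steps, n_particles = 50, 2000
true_pos = np.cumsum(np.full(n_steps, 0.5))            # 0.5 m per step
meas = true_pos + 0.3 * rng.standard_t(df=3, size=n_steps)

def t_logpdf(err, df=3.0, scale=0.3):
    """Student-t log-density up to a constant (the measurement pdf)."""
    z = err / scale
    return -0.5 * (df + 1) * np.log1p(z * z / df)

particles = rng.normal(0.0, 1.0, n_particles)
est = []
for z in meas:
    particles += 0.5 + rng.normal(0, 0.1, n_particles)  # motion model
    logw = t_logpdf(z - particles)                      # weight by pdf
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est.append(np.sum(w * particles))                   # weighted mean
    idx = rng.choice(n_particles, n_particles, p=w)     # resample
    particles = particles[idx]
est = np.array(est)
```

Because the t-distribution's tails are flat, an outlier measurement barely reweights the particle cloud, so the estimate stays near the consensus trajectory; a Kalman filter with the same data would chase every outlier.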
21. A Novel Gravity Compensation Method for High Precision Free-INS Based on "Extreme Learning Machine".
- Author
-
Xiao Zhou, Gongliu Yang, Qingzhong Cai, and Jing Wang
- Subjects
- INERTIAL navigation systems, STANDARD deviations, MACHINE learning, ACCELEROMETERS, GRAVITY
- Abstract
In recent years, with the emergence of high-precision inertial sensors (accelerometers and gyros), gravity compensation has become a major factor influencing the navigation accuracy of inertial navigation systems (INS), especially for high-precision INS. This paper presents preliminary results concerning the effect of gravity disturbance on INS. Meanwhile, this paper proposes a novel gravity compensation method for high-precision INS, which estimates the gravity disturbance along the track using the extreme learning machine (ELM) method based on measured gravity data on the geoid, continues the gravity disturbance upward to the height of the INS, and then compensates the obtained gravity disturbance into the error equations of the INS to restrain INS error propagation. The estimation accuracy of the gravity disturbance data is verified by numerical tests. The root mean square error (RMSE) of the ELM estimation method can be improved by 23% and 44% compared with the bilinear interpolation method in plain and mountain areas, respectively. To further validate the proposed gravity compensation method, field experiments with an experimental vehicle were carried out in two regions. Test 1 was carried out in a plain area and Test 2 in a mountain area. The field experiment results also prove that the proposed gravity compensation method can significantly improve the positioning accuracy. During the 2-h field experiments, the positioning accuracy was improved by 13% and 29%, respectively, in Tests 1 and 2, when the navigation scheme was compensated by the proposed gravity compensation method. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
22. Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation
- Author
-
Laura Ruotsalainen, Martti Kirkko-Jaakkola, Jesperi Rantanen, and Maija Mäkelä
- Subjects
- error modelling, sensor fusion, indoor positioning, particle filtering, Chemical technology, TP1-1185
- Abstract
The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy. Therefore, sophisticated error modelling and implementation of integration algorithms are key for providing a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither of the assumptions is correct for tactical applications, especially for dismounted soldiers, or rescue personnel. Therefore, error modelling and implementation of advanced fusion algorithms are essential for providing a viable result. Our approach is to use particle filtering (PF), which is a sophisticated option for integrating measurements emerging from pedestrian motion having non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and vision based heading and translation measurements to include the correct error probability density functions (pdf) in the particle filter implementation. Then, model fitting is used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, particle filtering method is developed to fuse all this information, where the weights of each particle are computed based on the specific models derived. 
The performance of the developed method is tested in two experiments, one on a university’s premises and another in realistic tactical conditions. The results show a significant improvement in horizontal localization accuracy when the measurement errors are carefully modelled and correctly incorporated into the particle filter implementation.
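The core of such a filter is reweighting particles with the modelled error pdf rather than a Gaussian. A minimal sketch (not the authors' implementation; the heavy-tailed Laplace pdf and all numbers are illustrative stand-ins for the error models fitted in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_logpdf(residual, scale):
    # Heavy-tailed Laplace error pdf -- a stand-in for the non-Gaussian
    # measurement-error models fitted in the paper.
    return -np.abs(residual) / scale - np.log(2.0 * scale)

def pf_update(particles, weights, measurement, scale=0.5):
    """Reweight particles with the error pdf, then resample systematically."""
    logw = np.log(weights) + laplace_logpdf(measurement - particles, scale)
    logw -= logw.max()                       # numerical stabilisation
    w = np.exp(logw)
    w /= w.sum()
    u = (rng.random() + np.arange(len(w))) / len(w)
    idx = np.minimum(np.searchsorted(np.cumsum(w), u), len(w) - 1)
    return particles[idx], np.full(len(w), 1.0 / len(w))

# toy example: prior particles around 0, range-like measurement at 1.0
particles = rng.normal(0.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
post, post_w = pf_update(particles, weights, 1.0)
```

After the update the particle cloud shifts towards the measurement, with the Laplace pdf discounting outliers less aggressively than a Gaussian would.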
- Published
- 2018
- Full Text
- View/download PDF
23. A Novel Algorithm Modelling for UWB Localization Accuracy in Remote Sensing
- Author
-
Zenon Chaczko, Zhengyu Yu, and Jiajia Shi
- Subjects
0203 Classical Physics, 0406 Physical Geography and Environmental Geoscience, 0909 Geomatic Engineering, ultra-wideband (UWB), localization, Kalman filter (KF), error modelling, Internet of Things (IoT), General Earth and Planetary Sciences - Abstract
At present, ultra-wideband (UWB) technology plays a vital role in indoor localization. As a new wireless communication technology, UWB has many advantages, such as high accuracy, strong anti-multipath ability, and a high transmission rate. However, in real-time operation, the accuracy of UWB is reduced by multi-sensor interference, antenna variations and system operation noise. We have developed a novel error model based on a curve-fitted Kalman filter (CFKF) algorithm to solve these issues. This paper investigates and develops an error modelling algorithm that can calibrate the signal sensors, reduce the errors, and mitigate noise levels and interference. As part of the research investigation, a range of experiments was executed to validate the accuracy, reliability and viability of the CFKF error modelling approach. The experimental results indicate that this novel approach significantly improves the accuracy and precision of beacon-based localization. Validation tests also show that the CFKF error modelling method can improve the localization accuracy of UWB-based solutions.
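As an illustration of the filtering component only (the curve-fitting step that gives the CFKF its name is the authors' own), a minimal scalar Kalman filter smoothing noisy UWB-style range readings, with hypothetical noise variances:

```python
import numpy as np

def kalman_1d(z, q=1e-3, r=0.04):
    """Scalar constant-position Kalman filter.
    q: process noise variance; r: measurement noise variance.
    In the CFKF, r would instead come from a curve fitted to the
    observed ranging errors (values here are illustrative)."""
    x, p = z[0], 1.0
    est = np.empty_like(z)
    for i, m in enumerate(z):
        p = p + q                  # predict
        k = p / (p + r)            # Kalman gain
        x = x + k * (m - x)        # measurement update
        p = (1.0 - k) * p
        est[i] = x
    return est

rng = np.random.default_rng(1)
true_range = 3.0                             # metres
z = true_range + rng.normal(0, 0.2, 200)     # noisy UWB range readings
est = kalman_1d(z)
```

The filtered track settles near the true range with far less scatter than the raw readings, which is the behaviour the CFKF refines by fitting the noise model to data.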
- Published
- 2022
24. Active disturbance rejection and predictive control strategy for a quadrotor helicopter.
- Author
-
Ma, Dailiang, Xia, Yuanqing, Li, Tianya, and Chang, Kai
- Abstract
In this study, an active disturbance rejection and predictive control strategy is presented to solve the trajectory tracking problem for an unmanned quadrotor helicopter subject to disturbances. The proposed control scheme is based on the quadrotor's dynamic model, where the effects of wind gusts are treated as additive disturbances on the six degrees of freedom. The predictive controller solves the path-following problem, with extended state observers used to estimate and compensate for disturbances. The active disturbance rejection control scheme is used to stabilise the rotational movements. The suggested control structure is verified in simulation studies in the presence of external disturbances and parametric uncertainties. The proposed method improves robustness to modelling errors and disturbances while smoothly tracking the reference trajectory. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
25. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.
- Author
-
Qingzhong Cai, Gongliu Yang, Ningfang Song, and Yiliang Liu
- Subjects
- *
INERTIAL navigation systems , *GLOBAL Positioning System , *GYROSCOPES , *DETECTORS , *CALIBRATION , *SIMULATION methods & models - Abstract
An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, atomic gyroscopes will come into use in the near future, with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can estimate all the parameters using a common dual-axis turntable. Laboratory and sailing tests show that the position accuracy over a five-day inertial navigation run can be improved by about 8% with the proposed calibration method, and by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method calibrates more error sources and higher-order small error parameters for ultra-high accuracy inertial measurement units (IMUs) using common turntables, and has great application potential in future atomic gyro INSs. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
26. Theoretical Limitations of Allan Variance-based Regression for Time Series Model Estimation.
- Author
-
Guerrier, Stephane, Molinari, Roberto, and Stebler, Yannick
- Subjects
REGRESSION analysis ,TIME series analysis ,ANALYSIS of variance ,STOCHASTIC processes ,EMAIL - Abstract
This letter formally proves the statistical inconsistency of the Allan variance-based estimation of latent (composite) model parameters. This issue has not been sufficiently investigated and highlighted since it is a technique that is still being widely used in practice, especially within the engineering domain. Indeed, among others, this method is frequently used for inertial sensor calibration, which often deals with latent time series models and practitioners in these domains are often unaware of its limitations. To prove the inconsistency of this method, we first provide a formal definition and subsequently deliver its theoretical properties, highlighting its limitations by comparing it with another statistically sound method. [ABSTRACT FROM AUTHOR]
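For readers unfamiliar with the quantity under discussion: the Allan variance at cluster size m is half the mean squared difference of successive cluster averages, and for white noise it decays as 1/m. A minimal sketch of the estimator (illustrative only, not the letter's formal definition):

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapped Allan variance of rate data y at cluster size m."""
    n = len(y) // m
    cluster_means = y[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(cluster_means) ** 2)

rng = np.random.default_rng(2)
white = rng.normal(0.0, 1.0, 200_000)   # white-noise rate data, variance 1
av_10 = allan_variance(white, 10)       # expect roughly 1/10
av_100 = allan_variance(white, 100)     # expect roughly 1/100
```

Allan variance-based calibration fits noise model parameters to curves such as this; the letter's point is that for latent (composite) models this regression is statistically inconsistent.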
- Published
- 2016
- Full Text
- View/download PDF
27. On the Design of Attitude-Heading Reference Systems Using the Allan Variance.
- Author
-
Hidalgo-Carrio, Javier, Arnold, Sascha, and Poulakis, Pantelis
- Subjects
- *
VARIANCES , *STOCHASTIC analysis , *INERTIAL navigation systems , *ROBUST control , *AUTONOMOUS underwater vehicles - Abstract
The Allan variance is a method to characterize stochastic random processes. The technique was originally developed to characterize the stability of atomic clocks and has also been successfully applied to the characterization of inertial sensors. Inertial navigation systems (INS) can provide accurate results in a short time, which tend to rapidly degrade in longer time intervals. During the last decade, the performance of inertial sensors has significantly improved, particularly in terms of signal stability, mechanical robustness, and power consumption. The mass and volume of inertial sensors have also been significantly reduced, offering system-level design and accommodation advantages. This paper presents a complete methodology for the characterization and modeling of inertial sensors using the Allan variance, with direct application to navigation systems. Although the concept of sensor fusion is relatively straightforward, accurate characterization and sensor-information filtering is not a trivial task, yet they are essential for good performance. A complete and reproducible methodology utilizing the Allan variance, including all the intermediate steps, is described. An end-to-end (E2E) process for sensor-error characterization and modeling up to the final integration in the sensor-fusion scheme is explained in detail. The strength of this approach is demonstrated with representative tests on novel, high-grade inertial sensors. Experimental navigation results are presented from two distinct robotic applications: a planetary exploration rover prototype and an autonomous underwater vehicle (AUV). [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
28. Modelling errors in a biometric re‐identification system.
- Author
-
DeCann, Brian and Ross, Arun
- Abstract
The authors consider the problem of 're‐identification', where a biometric system answers the question 'Has this person been encountered before?' without actually deducing the person's identity. Such a system is vital in biometric surveillance applications and applicable to biometric de‐duplication. In such a system, identifiers are created dynamically as and when the system encounters an input probe. Consequently, multiple probes of the same identity may be mistakenly assigned different identifiers, whereas probes from different identities may be mistakenly assigned the same identifier. In this study, the authors describe a re‐identification system and develop terminology as well as mathematical expressions for the prediction of matching errors. Furthermore, they demonstrate that the sequential order in which the probes are encountered by the system has a great impact on its matching performance. Experimental analysis based on unimodal and multimodal face and fingerprint scores confirms the validity of the designed error prediction model, and demonstrates that traditional metrics for biometric recognition fail to accurately characterise the error dynamics of a re‐identification system. [ABSTRACT FROM AUTHOR]
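The two error modes described above (the same identity spawning extra identifiers, and different identities sharing one) can be seen in a toy sequential simulation. This is a sketch under assumed error rates, not the authors' error prediction model:

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate_reid(labels, p_fnm=0.05, p_fm=0.01):
    """Toy re-identification run: probes arrive in sequence. A false
    non-match (prob p_fnm) fails to recognise a known identity; the probe
    then either falsely matches an existing identifier (prob p_fm) or is
    enrolled under a spurious new identifier. Rates are assumptions."""
    seen = {}                # true label -> first assigned identifier
    next_id = 0
    assigned = []
    for lab in labels:
        if lab in seen and rng.random() > p_fnm:
            aid = seen[lab]                            # correct re-identification
        elif seen and rng.random() < p_fm:
            aid = rng.choice(list(seen.values()))      # wrongly reuse an identifier
        else:
            aid, next_id = next_id, next_id + 1        # enrol a (possibly spurious) id
            seen.setdefault(lab, aid)
        assigned.append(aid)
    return np.array(assigned)

labels = rng.integers(0, 50, 1000)   # 1000 probes drawn from 50 identities
ids = simulate_reid(labels)
n_ids = len(np.unique(ids))          # exceeds 50 because of false non-matches
```

Re-running with the probes in a different order changes which errors occur, mirroring the paper's observation that encounter order affects matching performance.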
- Published
- 2015
- Full Text
- View/download PDF
29. Relative Contribution and Effect of Various Error Sources on the Performance of Mobile Mapping System (MMS).
- Author
-
Goel, Salil and Lohani, Bharat
- Abstract
Mobile Mapping Systems (MMSs) are built using sensors such as GPS, IMU and laser scanners. These systems are considered the fastest and most reliable means of capturing high-resolution, accurate three-dimensional information along a corridor. Quite a number of researchers have attempted to verify the accuracy of MMS through field experiments. However, limited literature is available explaining the contribution of the different sensor components to the total error budget or providing a statistically sound estimate of the total uncertainty in the data. Further, no published literature guides the selection of an optimal set of sensors for developing an MMS that generates data of the desired quality. This research paper demonstrates the effect of inherent sensor uncertainties on the overall error budget of an MMS. These errors are studied in isolation and in coupling with other errors. Further, a few guidelines are provided for the selection of optimal sensor components and how these components control system accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
30. Component-Based Transfer Path Analysis and Hybrid Substructuring at high frequencies: A treatise on error modelling in Transfer Path Analysis
- Author
-
Venugopal, Harikrishnan and Venugopal, Harikrishnan
- Abstract
The field of modal testing and analysis is currently facing a surge of interest in error modelling. Several errors which occur during testing campaigns are modelled analytically or numerically and propagated effectively to various system coupling and interface reduction routines. This study aims to propagate human errors, such as position and orientation measurement errors, and random noise-based errors in the measured Frequency Response Functions (FRFs) to the interface reduction algorithm called Virtual Point Transformation (VPT) and subsequently to a substructure coupling method called Frequency-Based Substructuring (FBS). These methods form the cornerstone of Transfer Path Analysis (TPA). Furthermore, common sources of error such as the sensor mass loading effect and sensor misalignment have also been investigated. Lastly, a new method to calculate the sensor positions and orientations after a measurement has been devised, based on the rigid body properties of the system and on the applied force characteristics. The error propagation was performed using a computationally efficient, first-order moment method and later validated using Monte-Carlo simulations. The results show that the orientation measurement error is the most significant, followed by the FRF error and the position measurement error. The mass loading effect is compensated using the Structural Modification Using Response Functions (SMURF) method and the sensor misalignment is corrected using a coordinate transformation. The sensor positions and orientations are accurately estimated from rigid body properties and applied force characteristics: individually using matrix algebra, and simultaneously using an optimization-based non-linear least squares solver.
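The first-order moment method used here amounts to the delta method: propagate the input covariance through the gradient of the response. A small sketch with a toy response function (not the thesis's FRF models), checked against Monte-Carlo exactly as the study validates it:

```python
import numpy as np

def first_order_var(grad, cov):
    """First-order (delta-method) output variance: g^T C g."""
    grad = np.asarray(grad)
    return float(grad @ np.asarray(cov) @ grad)

# toy response f(x1, x2) = x1 * x2 at a nominal point
mu = np.array([2.0, 3.0])
cov = np.diag([0.01, 0.04])
grad = np.array([mu[1], mu[0]])          # df/dx1 = x2, df/dx2 = x1
var_lin = first_order_var(grad, cov)     # 9*0.01 + 4*0.04 = 0.25

# Monte-Carlo validation of the linearised estimate
rng = np.random.default_rng(3)
x = rng.multivariate_normal(mu, cov, 100_000)
var_mc = np.var(x[:, 0] * x[:, 1])
```

The linearised variance agrees with the Monte-Carlo estimate to within a few percent here; for strongly non-linear responses the discrepancy grows, which is why the thesis validates the moment method against simulation.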
- Published
- 2020
31. Influence of surface reflectivity on reflectorless electronic distance measurement and terrestrial laser scanning.
- Author
-
Zámečníková, Miriam, Wieser, Andreas, Woschitz, Helmut, and Ressl, Camillo
- Subjects
- *
GEODESY , *EARTH analogs , *MEASUREMENT errors , *SCIENTIFIC & technical services industry , *STRUCTURAL analysis (Engineering) - Abstract
The uncertainty of electronic distance measurement to surfaces rather than to dedicated precision reflectors (reflectorless EDM) is affected by the entire system comprising instrument, atmosphere and surface. The impact of the latter is significant for applications like geodetic monitoring, high-precision surface modelling or laser scanner self-calibration. Nevertheless, it has not yet received sufficient attention and is not well understood. We have carried out an experimental investigation of the impact of surface reflectivity on the distance measurements of a terrestrial laser scanner. The investigation helps to clarify (i) whether variations of reflectivity cause systematic deviations of reflectorless EDM, and (ii) if so, whether it is possible and worthwhile to model these deviations. The results show that differences in reflectivity may cause systematic deviations of a few mm with diffusely reflecting surfaces, and even more with directionally reflecting ones. Using a bivariate quadratic polynomial, we were able to approximate these deviations as a function of measured distance and measured signal strength alone. Using this approximation to predict corrections, the deviations of the measurements could be reduced by about 70% in our experiment. We conclude that there is a systematic effect of surface reflectivity (or, equivalently, received signal strength) on the distance measurement and that it is possible to model and predict this effect. Integration into laser scanner calibration models may be beneficial for high-precision applications. The results may apply to a broad range of instruments, not only the specific laser scanner used herein. [ABSTRACT FROM AUTHOR]
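A bivariate quadratic correction of this kind can be fitted by ordinary least squares. A sketch on synthetic data (the coefficients, ranges and noise level are invented for illustration, not taken from the experiment):

```python
import numpy as np

def quad_design(r, s):
    """Bivariate quadratic basis in measured distance r and signal strength s."""
    return np.column_stack([np.ones_like(r), r, s, r * s, r**2, s**2])

rng = np.random.default_rng(4)
r = rng.uniform(5.0, 50.0, 300)                   # measured distances [m]
s = rng.uniform(0.1, 1.0, 300)                    # normalised signal strength
systematic = 0.002 - 0.0001 * r + 0.003 * s**2    # synthetic deviation [m]
d = systematic + rng.normal(0.0, 1e-4, 300)       # observed deviations + noise

coef, *_ = np.linalg.lstsq(quad_design(r, s), d, rcond=None)
pred = quad_design(r, s) @ coef
rms_before = np.sqrt(np.mean(d**2))
rms_after = np.sqrt(np.mean((d - pred) ** 2))
```

On this synthetic data the fitted polynomial removes most of the systematic deviation, leaving residuals near the noise floor, which is the same mechanism behind the roughly 70% reduction reported in the experiment.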
- Published
- 2014
- Full Text
- View/download PDF
32. ERROR ANALYSIS OF MOTION CORRECTION METHOD FOR LASER SCANNING OF MOVING OBJECTS.
- Author
-
Goel, Salil and Lohani, Bharat
- Subjects
ERROR analysis in mathematics ,MOTION analysis ,OPTICAL scanners - Abstract
The limitation of conventional laser scanning methods is that the objects being scanned must be static. The need to scan moving objects has resulted in the development of new methods capable of generating the correct 3D geometry of moving objects. Only limited literature is available describing the few methods capable of handling object motion during scanning, and all the existing methods rely on their own models or sensors. Studies on error modelling or analysis of any of these motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such 'motion correction' method. This method assumes the availability of position and orientation information for the moving object, which in general can be obtained by installing a POS system on board or by using tracking devices. It then uses this information along with the laser scanner data to correct the laser data, thus yielding the correct geometry despite the object being mobile during scanning. The major application of this method lies in the shipping industry, to scan ships either moving or parked at sea, and in scanning other objects such as hot air balloons or aerostats. It is to be noted that the other 'motion correction' methods described in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the 'motion correction' method, as well as a detailed account of the behavior and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to gain insights into the optimal utilization of available components for achieving the best results. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
33. Komponentbaserad överföringsanalys och hybridsubstrukturering för höga frekvenser (Component-based transfer path analysis and hybrid substructuring at high frequencies)
- Author
-
Venugopal, Harikrishnan
- Subjects
Uncertainty Propagation, Överföringsanalys, Error modelling, Transfer Path Analysis, Mechanical Engineering, Dynamisk substrukturering, Dynamic Substructuring, Osäkerhetspropagering, Feluppskattning, Maskinteknik - Abstract
The field of modal testing and analysis is currently facing a surge of interest in error modelling. Several errors which occur during testing campaigns are modelled analytically or numerically and propagated effectively to various system coupling and interface reduction routines. This study aims to propagate human errors, such as position and orientation measurement errors, and random noise-based errors in the measured Frequency Response Functions (FRFs) to the interface reduction algorithm called Virtual Point Transformation (VPT) and subsequently to a substructure coupling method called Frequency-Based Substructuring (FBS). These methods form the cornerstone of Transfer Path Analysis (TPA). Furthermore, common sources of error such as the sensor mass loading effect and sensor misalignment have also been investigated. Lastly, a new method to calculate the sensor positions and orientations after a measurement has been devised, based on the rigid body properties of the system and on the applied force characteristics. The error propagation was performed using a computationally efficient, first-order moment method and later validated using Monte-Carlo simulations. The results show that the orientation measurement error is the most significant, followed by the FRF error and the position measurement error. The mass loading effect is compensated using the Structural Modification Using Response Functions (SMURF) method and the sensor misalignment is corrected using a coordinate transformation. The sensor positions and orientations are accurately estimated from rigid body properties and applied force characteristics: individually using matrix algebra, and simultaneously using an optimization-based non-linear least squares solver.
- Published
- 2020
34. Range camera self-calibration with scattering compensation
- Author
-
Lichti, Derek D., Qi, Xiaojuan, and Ahmed, Tanvir
- Subjects
- *
CAMERA calibration , *TIME-of-flight mass spectrometry , *LIGHT scattering , *DATA quality , *THREE-dimensional imaging , *RANGEFINDERS (Photography) , *IMAGE quality analysis , *PARAMETER estimation - Abstract
Abstract: Time-of-flight range camera data are prone to the scattering bias caused by multiple internal reflections of light received from a highly reflective object in the camera’s foreground, which induce a phase shift in the light received from background targets. The corresponding range bias can have serious implications for the quality of data of captured scenes as well as for the geometric self-calibration of range cameras. In order to minimise the impact of the scattering range biases, the calibration must be performed over a planar target field rather than a more desirable 3D target field. This significantly impacts the quality of the rangefinder offset parameter estimation due to its high correlation with the camera perspective centre position. In this contribution a new model to compensate for scattering-induced range errors is proposed that allows range camera self-calibration to be conducted over a 3D target field. Developed from experimental observations of scattering behaviour under specific scene conditions, it comprises four new additional parameters that are estimated in the self-calibrating bundle adjustment. The results of experiments conducted on five range camera datasets demonstrate the model’s efficacy in compensating for the scattering error without compromising model fidelity. It is further demonstrated that it actually reduces the rangefinder offset-perspective centre correlation and that its use with a 3D target field is the preferred method for calibrating narrow field-of-view range cameras. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
35. Sensitivity analysis of a decision tree classification to input data errors using a general Monte Carlo error sensitivity model.
- Author
-
Huang, Zhi and Laffan, ShawnW.
- Subjects
- *
CARTOGRAPHY , *GEOLOGY , *MONTE Carlo method , *GAUSSIAN distribution , *PERTURBATION theory , *SIMULATION methods & models - Abstract
We analysed the sensitivity of a decision-tree-derived forest type mapping to simulated data errors in the input digital elevation model (DEM), geology and remotely sensed (Landsat Thematic Mapper) variables, using a stochastic Monte Carlo simulation model coupled with a one-at-a-time approach. The DEM error was assumed to be spatially autocorrelated, with its magnitude being a percentage of the elevation value. The error in the categorical geology data was assumed to be positional and limited to boundary areas. The Landsat data error was assumed to be spatially random, following a Gaussian distribution. Each layer was perturbed using its error model with increasing levels of error, and the effect on the forest type mapping was assessed. The results of the three sensitivity analyses were markedly different: the classification was most sensitive to the DEM error, followed by the Landsat data errors, with only limited sensitivity to the geology data error. A linear increase in error resulted in non-linear increases in effect for the DEM and Landsat errors, while the effect was linear for geology. As an example, a DEM error of as little as ±2% reduced the overall test accuracy by more than 2%. More importantly, the same uncertainty level caused nearly 10% of the study area, on average, to change its initial class assignment at each perturbation. A spatial assessment of the sensitivities indicates that most of the pixel changes occurred within those forest classes expected to be more sensitive to data error. In addition to characterising the effect of errors on forest type mapping using decision trees, this study has demonstrated the generality of employing Monte Carlo analysis for the sensitivity and uncertainty analysis of categorical outputs, which have characteristics distinct from those of numerical outputs. [ABSTRACT FROM AUTHOR]
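The one-at-a-time Monte Carlo scheme can be sketched as follows, with a toy two-layer classifier standing in for the decision tree (layer names, thresholds and error levels are invented; the paper's DEM error was additionally spatially autocorrelated):

```python
import numpy as np

rng = np.random.default_rng(5)

def classify(dem, tm_band):
    # toy stand-in for the decision tree: class from elevation and a spectral band
    return (dem > 500.0).astype(int) * 2 + (tm_band > 0.5).astype(int)

dem = rng.uniform(200.0, 800.0, 10_000)      # elevations [m]
tm = rng.uniform(0.0, 1.0, 10_000)           # normalised band values
base = classify(dem, tm)

def oat_change_rate(layer, error_level, n_runs=20):
    """One-at-a-time sensitivity: mean fraction of pixels changing class
    when a single input layer is perturbed at the given error level."""
    changed = []
    for _ in range(n_runs):
        if layer == "dem":
            # multiplicative DEM error (spatially random here, for simplicity)
            c = classify(dem * (1.0 + error_level * rng.standard_normal(dem.size)), tm)
        else:
            c = classify(dem, tm + error_level * rng.standard_normal(tm.size))
        changed.append(np.mean(c != base))
    return float(np.mean(changed))

sens_dem = oat_change_rate("dem", 0.02)      # ±2% DEM error
```

Repeating the loop for each layer at increasing error levels produces the per-layer sensitivity curves the study compares.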
- Published
- 2009
- Full Text
- View/download PDF
36. Microelectromechanical Systems Inertial Measurement Unit Error Modelling and Error Analysis for Low-cost Strapdown Inertial Navigation System.
- Author
-
Ramalingam, R., Anitha, G., and Shanmugam, J.
- Subjects
ERROR analysis in mathematics ,MATHEMATICAL models ,MICROELECTROMECHANICAL systems ,DETECTORS ,MATHEMATICAL decomposition ,WAVELETS (Mathematics) - Abstract
This paper presents the error modelling and error analysis of a microelectromechanical systems (MEMS) inertial measurement unit (IMU) for a low-cost strapdown inertial navigation system (INS). The INS consists of the IMU and a navigation processor. The IMU provides the acceleration and angular rate of the vehicle in all three axes. In this paper, the errors that affect the MEMS IMU, which is of low cost and small volume, are stochastically modelled and analysed using the Allan variance. Wavelet decomposition is introduced to remove the high-frequency noise that affects the sensors, recovering the angular rates and accelerations with less noise and thereby increasing the accuracy of the strapdown INS. The results show the effect of errors on the sensor outputs, the easy interpretation of random errors by the Allan variance, and the increase in accuracy when wavelet decomposition is used to denoise the raw inertial sensor data. [ABSTRACT FROM AUTHOR]
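Wavelet denoising of rate data can be sketched with a Haar decomposition and hard thresholding of detail coefficients. This is an illustrative sketch only: the paper does not specify the wavelet family, depth or thresholding rule, and all signal parameters below are invented:

```python
import numpy as np

def haar_denoise(x, levels=4, thresh=0.6):
    """Haar wavelet shrinkage: decompose, hard-threshold the detail
    coefficients, reconstruct. len(x) must be divisible by 2**levels."""
    s2 = np.sqrt(2.0)
    approx, details = x.astype(float), []
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / s2     # approximation coefficients
        d = (approx[0::2] - approx[1::2]) / s2     # detail coefficients
        details.append(np.where(np.abs(d) > thresh, d, 0.0))  # hard threshold
        approx = a
    for d in reversed(details):                    # inverse transform
        out = np.empty(2 * len(approx))
        out[0::2] = (approx + d) / s2
        out[1::2] = (approx - d) / s2
        approx = out
    return approx

rng = np.random.default_rng(9)
t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2 * np.pi * 2 * t)             # slowly varying "true" rate
noisy = clean + rng.normal(0.0, 0.3, 1024)    # plus high-frequency sensor noise
denoised = haar_denoise(noisy)
```

Most of the white noise lives in the detail coefficients while the slowly varying rate signal does not, so thresholding the details reduces the error relative to the raw samples.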
- Published
- 2009
- Full Text
- View/download PDF
37. Propagating reliable estimates of hydrological forecast uncertainty to many lead times.
- Author
-
Bennett, James C., Robertson, David E., Wang, Quan J., Li, Ming, and Perraud, Jean-Michel
- Subjects
- *
LEAD time (Supply chain management) , *HYDROLOGICAL forecasting , *MOVING average process , *AUTOREGRESSION (Statistics) , *GAUSSIAN mixture models , *AUTOREGRESSIVE models - Abstract
• A new error model to propagate uncertainty for ensemble streamflow forecasts. • Suitable for ephemeral and perennial rivers. • We compare it to an existing error model. • The new model produces more reliable ensembles to long (>150 h) lead times. • The new model better propagates uncertainty downstream for nested gauges. We propose a revised version of the ERRIS (error reduction and representation in stages) error model capable of reliably propagating hydrological uncertainty to long lead times (>150 time steps). ERRIS employs four stages: a transformation to handle heteroscedasticity, a moving average bias-correction, an autoregressive model and two mixture Gaussian distributions. To propagate uncertainty through multiple lead times, ERRIS makes use of a technique termed 'stochastic updating'. Ensemble spread at long lead times is partly controlled by the interplay of the autoregression coefficient ρ and the width of the error distribution. When ρ approaches 1 and the error distribution is wide, this causes over-wide ensemble distributions at long lead times. We control this interplay with the moving average bias-correction, which reduces the value of ρ and the width of the error distribution, resulting in reliable ensembles at long lead times. An additional control on the width of the ensemble at longer lead times is a restriction applied to the autoregressive model. This restriction guards against large overcorrections, which can lead to very poor forecasts. Applying the restriction when parameters are inferred can result in over-wide residual distributions. We propose the simple expedient of applying the restriction only when forecasts are generated, not when parameters are inferred. We show through a comparison with an earlier version of ERRIS that this produces more reliable ensemble distributions at long lead times, whilst still guarding against overcorrection. 
The resulting error model is simple and computationally efficient, and thus suitable for deployment in operational streamflow forecasting systems. [ABSTRACT FROM AUTHOR]
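The 'stochastic updating' idea, carrying each ensemble member's residual forward with the autoregressive model, and the interplay of ρ with the error width can be illustrated with a minimal AR(1) sketch (not the ERRIS code; the transformation, bias-correction and mixture-Gaussian stages are omitted):

```python
import numpy as np

def propagate_ensemble(n_members, n_leads, rho, sigma, seed=6):
    """Carry each member's AR(1) residual through the lead times
    ('stochastic updating'); return the ensemble spread per lead time."""
    rng = np.random.default_rng(seed)
    eta = np.zeros(n_members)
    spread = np.empty(n_leads)
    for t in range(n_leads):
        eta = rho * eta + rng.normal(0.0, sigma, n_members)
        spread[t] = eta.std()
    return spread

# a rho near 1 with a wide error distribution inflates spread at long lead
# times; reducing rho (as the moving-average bias-correction does) reins it in
s_high = propagate_ensemble(2000, 200, rho=0.98, sigma=1.0)
s_low = propagate_ensemble(2000, 200, rho=0.80, sigma=1.0)
```

The spread converges to the stationary value σ/√(1−ρ²): about 5.0 for ρ = 0.98 but only about 1.7 for ρ = 0.80, which is the over-wide-ensemble effect the revised model controls.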
- Published
- 2021
- Full Text
- View/download PDF
38. The influence of elevation error on the morphometrics of channel networks extracted from DEMs and the implications for hydrological modelling.
- Author
-
Lindsay, John B. and Evans, Martin G.
- Subjects
STREAM measurements ,STOCHASTIC analysis ,HYDROLOGIC models ,WATERSHEDS ,WATERSHED hydrology ,STREAMFLOW ,SIMULATION methods & models ,GEOMORPHOLOGICAL mapping ,HYDROGRAPHY - Abstract
The article evaluates the effect of elevation error on the reliability of estimates of several stream network morphometrics, including stream order, the bifurcation, length, area and slope ratios, stream magnitude, network diameter, the flood magnitude and timing parameters of the geomorphological instantaneous unit hydrograph (GIUH) and the network width functions. Using the digital elevation models (DEMs) of three basins, stochastic simulations showed that a moderate magnitude of elevation error can result in significant uncertainty in the estimates of network morphometrics.
- Published
- 2008
- Full Text
- View/download PDF
39. Causes and consequences of error in digital elevation models.
- Author
-
Fisher, Peter F. and Tate, Nicholas J.
- Subjects
- *
GEOGRAPHIC mathematics , *SURFACE of the earth , *UNCERTAINTY (Information theory) , *ERRORS , *INFORMATION resources management , *INTERPOLATION , *RESEARCH , *TECHNOLOGY - Abstract
All digital data contain error and many are uncertain. Digital models of elevation surfaces consist of files containing large numbers of measurements representing the height of the surface of the earth, and therefore a proportion of those measurements are very likely to be subject to some level of error and uncertainty. The collection and handling of such data and their associated uncertainties has been a subject of considerable research, which has focused largely upon the description of the effects of interpolation and resolution uncertainties, as well as modelling the occurrence of errors. However, digital models of elevation derived from new technologies employing active methods of laser and radar ranging are becoming more widespread, and past research will need to be re-evaluated in the near future to accommodate such new data products. In this paper we review the source and nature of errors in digital models of elevation, and in the derivatives of such models. We examine the correction of errors and assessment of fitness for use, and finally we identify some priorities for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
40. Modelling for registration of remotely sensed imagery when reference control points contain error.
- Author
-
Ge, Yong, Leung, Yee, Ma, Jianghong, and Wang, Jinfeng
- Abstract
Reference control points (RCPs) used in establishing the regression model in the registration or geometric correction of remote sensing images are generally assumed to be “perfect”. That is, the RCPs, as explanatory variables in the regression equation, are accurate and the coordinates of their locations contain no errors. The ordinary least squares (OLS) estimator has therefore been applied extensively to the registration or geometric correction of remotely sensed data. However, this assumption is often invalid in practice because RCPs always contain errors, and these errors are one of the main sources that lower the accuracy of geometric correction of an uncorrected image. Under this situation, the OLS estimator is biased: it cannot handle explanatory variables with errors and cannot appropriately propagate errors from the RCPs to the corrected image. It is therefore essential to develop new, feasible methods to overcome this problem. This paper introduces a consistent adjusted least squares (CALS) estimator and proposes a relaxed consistent adjusted least squares (RCALS) estimator, with the latter being more general and flexible, for geometric correction or registration. These estimators are well suited to correcting errors contained in the RCPs and to appropriately propagating errors of the RCPs to the corrected image, with and without prior information. The objective of the CALS and proposed RCALS estimators is to improve the accuracy of the measured values by weakening the effect of measurement errors. The conceptual arguments are substantiated by real remotely sensed data. Compared to the OLS estimator, the CALS and RCALS estimators give a superior overall performance in estimating the regression coefficients and the variance of the measurement errors. [ABSTRACT FROM AUTHOR]
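The CALS/RCALS estimators themselves are not reproduced in the abstract. The classical total least squares (TLS) estimator illustrates the same errors-in-variables idea, however: unlike OLS, it accounts for noise in the explanatory variables. A minimal sketch, with synthetic data standing in for noisy RCP coordinates:

```python
import numpy as np

def tls_fit(X, y):
    """Total least squares via SVD: minimises perpendicular residuals,
    allowing for errors in X as well as in y."""
    Z = np.column_stack([X, y])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    n = X.shape[1]
    v = Vt[-1]            # right singular vector of the smallest singular value
    return -v[:n] / v[n]  # regression coefficients

rng = np.random.default_rng(0)
true_b = np.array([2.0, -1.0])
X_true = rng.normal(size=(200, 2))
# Both the "control point" coordinates and the response carry noise:
X_obs = X_true + rng.normal(0.0, 0.1, size=X_true.shape)
y_obs = X_true @ true_b + rng.normal(0.0, 0.1, size=200)
b_tls = tls_fit(X_obs, y_obs)
```

With equal noise in all columns, TLS is the maximum-likelihood estimator of the linear relation, whereas OLS is attenuated toward zero.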
- Published
- 2006
- Full Text
- View/download PDF
41. Uncovering the statistical and spatial characteristics of fine toposcale DEM error.
- Author
-
Oksanen, J. and Sarjakoski, T.
- Subjects
- *
OPTICAL radar , *GEOGRAPHIC information systems , *GEOLOGICAL statistics , *OPTICAL rotatory dispersion , *GEOMANCY , *AUTOCORRELATION (Statistics) , *DISPERSION (Chemistry) , *LASER communication systems , *OPTICAL communications , *RADAR - Abstract
The aim of our study was to characterize the statistical and spatial details of the errors in a fine toposcale DEM derived from contour data. Fine toposcale DEMs are typically represented on a 5–50 m grid and used at application scales of 1:10 000–1:50 000. The errors were determined using high-quality reference data from an airborne laser scanner covering the entire study area. The work was motivated by the essential role played by the correct characterization of DEM error in error-propagation studies. The results showed that the spatial autocorrelation of the fine toposcale DEM error was the result of a complex combination of random and systematic-like components, and its appropriate modelling by geostatistical methods is problematic because of the small extent of the areas in which the assumption of stationarity is valid. In addition, describing the shape of the DEM error distribution was impossible with a single parameter of dispersion. This was due to a large number of outliers, which suggests that more robust descriptors of the error should be used in addition to conventional error statistics. [ABSTRACT FROM AUTHOR]
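The call for more robust error descriptors can be illustrated with the normalised median absolute deviation (NMAD), a standard robust alternative to the standard deviation in DEM accuracy assessment. The data below are synthetic, with a small fraction of gross outliers mimicking, say, vegetation returns:

```python
import numpy as np

def robust_error_stats(errors):
    """Robust descriptors of a DEM error distribution, far less
    sensitive to outliers than the mean/standard deviation pair."""
    e = np.asarray(errors, dtype=float)
    median = float(np.median(e))
    # NMAD equals the standard deviation for normally distributed errors.
    nmad = float(1.4826 * np.median(np.abs(e - median)))
    abs_q95 = float(np.quantile(np.abs(e), 0.95))  # robust tail descriptor
    return {"median": median, "nmad": nmad, "abs_q95": abs_q95}

rng = np.random.default_rng(1)
errors = rng.normal(0.0, 0.5, 10_000)
errors[:100] = 25.0          # 1% gross outliers
stats = robust_error_stats(errors)
```

Here the NMAD stays close to the true 0.5 m dispersion of the error bulk, while the conventional standard deviation is inflated several-fold by the outliers.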
- Published
- 2006
- Full Text
- View/download PDF
42. Error propagation of DEM-based surface derivatives
- Author
-
Oksanen, Juha and Sarjakoski, Tapani
- Subjects
- *
SPATIAL systems , *HYDRAULIC engineering , *GEOLOGICAL statistics , *STATISTICAL correlation - Abstract
Abstract: This paper presents research showing how random errors in a fine toposcale digital elevation model (DEM) are propagated to DEM-based surface derivatives. The focus was on two constrained derivatives, slope and aspect, and one unconstrained derivative, drainage basin delineation. The error propagation was explored using numerical and analytical methods, and in both approaches the DEM error was modelled as a second-order stationary Gaussian random process. The results were summarised in a case study in which 32 realistic scenarios of DEM error models were used. The scenarios were based on exponential and Gaussian spatial autocorrelation models with four sills (0.0625, 0.25, 1.00, and 4.00 m²) and practical ranges (0, 30, 60, and 120 m). We found that, as expected, an increase in DEM error increased the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation model appears to have varying effects on the error propagation analysis depending on the application. In constrained surface derivatives, such as slope and aspect, the maximum error in the results appeared when the practical range of the error's spatial autocorrelation was roughly equal to the size of the surface derivative's calculation window. In unconstrained terrain analysis, such as drainage basin delineation, the variance of the results appeared to increase as the spatial autocorrelation range increased. Until now, the use of spatially uncorrelated DEM error models has been considered a 'worst-case scenario', but this opinion may now be challenged because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. In addition, the study revealed that the appropriate shape of the spatial autocorrelation model, either exponential or Gaussian, was not as important as the choice of appropriate autocorrelation parameters: practical range and sill. However, the shape of the spatial autocorrelation model appeared to have more influence on the calculation of slope and aspect than on the drainage basin delineation. For error-propagation analysis purposes, an analytical approach appears to be more useful for constrained derivatives, while the Monte Carlo method is appropriate for analysing both constrained and unconstrained derivatives. [Copyright Elsevier]
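A sketch of one ingredient of such an analysis: drawing a realisation of a second-order stationary Gaussian DEM error field with an exponential covariance model. The Cholesky approach below is only viable for small grids (the covariance matrix has one row per cell); the sill and practical-range parameterisation mirrors the scenarios in the abstract, but the grid size and seed are illustrative.

```python
import numpy as np

def correlated_error_field(n, cell, sill, practical_range, seed=7):
    """One realisation of a second-order stationary Gaussian DEM error
    field with an exponential covariance model, via Cholesky
    factorisation of the full covariance matrix (small grids only)."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:n, 0:n]
    pts = np.column_stack([xx.ravel(), yy.ravel()]) * cell
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    # Exponential model; practical range = 3 * correlation length.
    cov = sill * np.exp(-3.0 * d / practical_range)
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(n * n))  # tiny nugget for stability
    return (L @ rng.standard_normal(n * n)).reshape(n, n)

# Sill 0.25 m^2 and practical range 60 m, as in one of the 32 scenarios:
err = correlated_error_field(n=20, cell=10.0, sill=0.25, practical_range=60.0)
```

Adding such a realisation to the DEM and recomputing slope, aspect, or basin outlines, over many realisations, is the Monte Carlo branch of the analysis.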
- Published
- 2005
- Full Text
- View/download PDF
43. Error Analysis of a Serial–Parallel Type Machine Tool.
- Author
-
Zhoa, J.-W., Fan, K.-C., Chang, T.-H., and Li, Z.
- Published
- 2002
- Full Text
- View/download PDF
44. On-line Smoothing and Error Modelling for Integration of GNSS and Visual Odometry
- Author
-
Thanh Trung Duong, Kai-Wei Chiang, and Dinh Thuan Le
- Subjects
INS, error modelling, integration, Extended Kalman filter, visual odometry, Computer vision, navigation, on-line smoothing, Inertial navigation system, GNSS, Navigation system, Sensor fusion, GNSSS applications, Smoothing - Abstract
Global navigation satellite systems (GNSSs) are commonly used for navigation and mapping applications. However, in GNSS-hostile environments, where the GNSS signal is noisy or blocked, the navigation information provided by a GNSS is inaccurate or unavailable. To overcome these issues, this study proposed a real-time visual odometry (VO)/GNSS integrated navigation system. An on-line smoothing method based on the extended Kalman filter (EKF) and the Rauch-Tung-Striebel (RTS) smoother was proposed. VO error modelling was also proposed to estimate the VO error and compensate the incoming measurements. Field tests were performed in various GNSS-hostile environments, including under a tree canopy and an urban area. An analysis of the test results indicates that with the EKF used for data fusion, the root-mean-square error (RMSE) of the three-dimensional position is about 80 times lower than that of the VO-only solution. The on-line smoothing and error modelling made the results more accurate, allowing seamless on-line navigation information. The efficiency of the proposed methods in terms of cost and accuracy compared to the conventional inertial navigation system (INS)/GNSS integrated system was demonstrated.
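The EKF-plus-RTS-smoother combination named in the abstract can be sketched on a toy 1-D constant-velocity model. This is a generic textbook formulation, not the authors' VO/GNSS implementation, and all noise parameters are illustrative:

```python
import numpy as np

def kf_rts(zs, dt=1.0, q=0.01, r=1.0):
    """Kalman filter followed by a Rauch-Tung-Striebel smoother for a
    1-D constant-velocity model; state = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition
    H = np.array([[1.0, 0.0]])                       # position is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])              # process noise
    R = np.array([[r]])                              # measurement noise
    n = len(zs)
    xf = np.zeros((n, 2)); Pf = np.zeros((n, 2, 2))  # filtered
    xp = np.zeros((n, 2)); Pp = np.zeros((n, 2, 2))  # predicted
    x, P = np.zeros(2), 10.0 * np.eye(2)
    for k, z in enumerate(zs):
        xp[k], Pp[k] = F @ x, F @ P @ F.T + Q        # predict
        K = Pp[k] @ H.T @ np.linalg.inv(H @ Pp[k] @ H.T + R)
        x = xp[k] + K @ (np.atleast_1d(z) - H @ xp[k])
        P = (np.eye(2) - K @ H) @ Pp[k]
        xf[k], Pf[k] = x, P
    xs = xf.copy()
    for k in range(n - 2, -1, -1):                   # backward smoothing pass
        C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
    return xf, xs

rng = np.random.default_rng(3)
truth = 0.5 * np.arange(200.0)                       # constant-velocity track
zs = truth + rng.normal(0.0, 1.0, 200)
xf, xs = kf_rts(zs)
```

The smoothed track `xs` uses future as well as past measurements, so its position error is lower than that of the forward-only filter `xf`; the on-line variant in the paper applies the backward pass over a sliding window.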
- Published
- 2019
- Full Text
- View/download PDF
45. Relations between task and activity: elements for elaborating a framework for error analysis.
- Author
-
LEPLAT, J.
- Abstract
The notion of error, when applied to an activity or its result, implies the notion of task: error expresses the deviation between the activity and the task, considered from an angle judged to be relevant. The task and the activity are the object of representations both for the analyst (or specialist) and for the driver. Four representations are dealt with in this paper: the task and the activity for the specialist, and the task and the activity for the driver. An interpretation is proposed for these representations, and they are illustrated using some of the work already carried out in this field. The signification of deviations between these representations is then discussed, together with the value of studying these deviations in order to clarify error-producing mechanisms. Analysis in terms of task and activity raises methodological and practical problems, which are touched upon; it does not exclude reference to psychological theoretical frameworks to which it is worthwhile to link it. This perspective raises questions that make it possible to enhance the study of errors; it could be completed at a later date by extending it to include other representation categories. [ABSTRACT FROM PUBLISHER]
- Published
- 1990
- Full Text
- View/download PDF
46. A Combined Gravity Compensation Method for INS Using the Simplified Gravity Model and Gravity Database
- Author
-
Xiao Zhou, Gongliu Yang, Jing Wang, and Zeyang Wen
- Subjects
Gravity, error modelling, gravity model, gravity compensation, high precision free-INS, extreme learning machine (ELM), Terrain, Accelerometer, Inertial measurement unit, Inertial navigation system, Trajectory, Database - Abstract
In recent decades, gravity compensation has become an important way to reduce the position error of an inertial navigation system (INS), especially a high-precision INS, because of the extensive application of high-precision inertial sensors (accelerometers and gyros). This paper first derives the INS solution error considering gravity disturbance and simulates the results. It then proposes a combined gravity compensation method using a simplified gravity model and a gravity database. The new combined method consists of two steps. Step 1 subtracts the normal gravity using a simplified gravity model. Step 2 first obtains the gravity disturbance on the trajectory of the carrier with the help of ELM training based on measured gravity data (provided by the Institute of Geodesy and Geophysics, Chinese Academy of Sciences), and then compensates it in the error equations of the INS, considering the gravity disturbance, to further improve the navigation accuracy. The effectiveness and feasibility of this new gravity compensation method for the INS are verified through vehicle tests in two different regions: one in flat terrain with mild gravity variation and the other in complex terrain with fierce gravity variation. During 2 h vehicle tests, the positioning accuracy of the two tests improved by 20% and 38%, respectively, after the gravity was compensated by the proposed method.
- Published
- 2018
- Full Text
- View/download PDF
47. Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation
- Author
-
Martti Kirkko-Jaakkola, Laura Ruotsalainen, Maija Makela, Jesperi Rantanen, National Land Survey of Finland, and Maanmittauslaitos
- Subjects
error modelling, sensor fusion, indoor positioning, particle filtering, Simultaneous localization and mapping, Sonar, Inertial measurement unit, Observational error, Statistical model, Kalman filter, Gaussian noise, Particle filter, Algorithm - Abstract
The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy, so sophisticated error modelling and careful implementation of the integration algorithms are key to a viable result. The algorithms traditionally used for multi-sensor fusion have been different versions of the Kalman filter. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither assumption holds for tactical applications, especially for dismounted soldiers or rescue personnel. Our approach is therefore to use particle filtering (PF), a sophisticated option for integrating measurements emerging from pedestrian motion with non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and from vision-based heading and translation measurements, so that the correct error probability density functions (pdfs) are included in the particle filter implementation. Model fitting is then used to verify the pdfs of the measurement errors. Based on the deduced error models, a particle filtering method is developed to fuse all this information, where the weight of each particle is computed from the specific models derived. The performance of the developed method is tested in two experiments, one at a university's premises and another in realistic tactical conditions. The results show a significant improvement in horizontal localization when the measurement errors are carefully modelled and their inclusion in the particle filtering implementation is correctly realized.
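A minimal sketch of the core idea: weighting particles by a non-Gaussian (here heavy-tailed, Cauchy-like) measurement-error pdf instead of a Gaussian. The 1-D state and all noise parameters are illustrative, not the paper's pedestrian motion models:

```python
import numpy as np

def pf_step(particles, weights, z, meas_pdf, rng):
    """One particle-filter cycle for a 1-D position state: propagate,
    reweight with a (possibly non-Gaussian) measurement-error pdf,
    and resample when the effective sample size collapses."""
    particles = particles + rng.normal(1.0, 0.1, size=particles.shape)
    weights = weights * meas_pdf(z - particles)
    weights = weights / weights.sum()
    n = len(particles)
    if 1.0 / np.sum(weights**2) < 0.5 * n:        # effective sample size test
        idx = rng.choice(n, n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights

# Heavy-tailed (Cauchy) error pdf in place of the usual Gaussian:
def cauchy_pdf(r, gamma=0.5):
    return gamma / (np.pi * (r**2 + gamma**2))

rng = np.random.default_rng(4)
particles = rng.normal(0.0, 1.0, 1000)
weights = np.full(1000, 1.0 / 1000)
truth = 0.0
for _ in range(20):
    truth += 1.0                                  # true motion: 1 unit/step
    z = truth + rng.standard_cauchy() * 0.5       # heavy-tailed measurement
    particles, weights = pf_step(particles, weights, z, cauchy_pdf, rng)
estimate = float(np.sum(weights * particles))
```

Because the likelihood matches the heavy-tailed noise, an occasional gross outlier barely perturbs the weights, which is precisely why fitting the correct error pdf matters.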
- Published
- 2018
48. Confidence and reliability measures in speaker verification
- Author
-
Richiardi, Jonas, Drygajlo, Andrzej, and Prodanov, Plamen
- Subjects
- *
BIOMETRY , *IDENTIFICATION , *AUTOMATIC speech recognition , *DATABASES - Abstract
Abstract: Speaker verification is a biometric identity verification technique whose performance can be severely degraded by the presence of noise. Using a coherent notation, we reformulate and review several methods that have been proposed to quantify the uncertainty in verification results, some with a view to coping with the effects of mismatched training-testing environments. We also include a recently proposed method, which is firmly rooted in a probabilistic approach and interpretation, and explicitly measures signal quality before assigning a reliability value to the speaker verification classifier's decision. We evaluate the performance of the confidence and reliability measures over a noisy 251-user database, showing that taking signal-domain quality into account can lead to better accuracy in predicting classifier errors. We discuss possible strategies for using the measures in a speaker verification system, balancing acquisition duration and verification error rate. [Copyright Elsevier]
- Published
- 2006
- Full Text
- View/download PDF
49. Paving the way for future use of the Urban Trench model along with a lane level road map
- Author
-
David Betaille, Cadic, Ifsttar, Géolocalisation (IFSTTAR/COSYS/GEOLOC), and Institut Français des Sciences et Technologies des Transports, de l'Aménagement et des Réseaux (IFSTTAR)-PRES Université Nantes Angers Le Mans (UNAM)
- Subjects
Intelligent transportation systems, GNSS positioning, error modelling, non-line-of-sight, 3D city model, urban trench, road map, location-based service, solid modeling, civil engineering, transport engineering, urban travel, GNSS applications - Abstract
European Navigation Conference, Lausanne, Switzerland, 09/05/2017 - 12/05/2017; Positioning is one of the key functional components of intelligent transportation systems (ITS), alongside communication and computing. Road transport and Location Based Services (LBS) make use of GNSS for positioning purposes, and both are the main application domains of GNSS. This article focuses on urban applications, where the rover moves in streets whose surrounding buildings form urban trenches. The so-called Urban Trench model is used to correct non-line-of-sight satellite measurements. The principle is first presented. Several modalities of applying this principle are then addressed, in particular the a priori road-lane map-matching of the rover position estimate. Finally, experimental data are processed with this principle and its various applications; the results are shown and analysed, first conclusions are drawn, and research perspectives are outlined.
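The exact Urban Trench formulas are not given in the abstract. The sketch below uses only the generic mirror-image construction for a single vertical facade, which conveys how a non-line-of-sight reflection lengthens the signal path; the geometry and all parameter values are illustrative assumptions, not the published model.

```python
import numpy as np

def nlos_extra_path(d_wall, elev_deg, rel_az_deg):
    """Extra path length (m) of a signal reflected once off a vertical
    facade, via the mirror-image construction: the reflected path has
    the length of the direct ray to the receiver's mirror image.

    d_wall     -- perpendicular receiver-to-facade distance (m)
    elev_deg   -- satellite elevation angle (degrees)
    rel_az_deg -- satellite azimuth relative to the facade normal (degrees)
    """
    el = np.radians(elev_deg)
    daz = np.radians(rel_az_deg)
    return 2.0 * d_wall * np.cos(el) * np.cos(daz)

# A low satellite reflecting off a facade 15 m away adds tens of metres:
delay = nlos_extra_path(d_wall=15.0, elev_deg=20.0, rel_az_deg=10.0)
```

Subtracting such a geometric delay from NLOS pseudoranges, using building heights from a 3D city model, is the essence of the correction the paper applies.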
- Published
- 2017
50. On-line Smoothing and Error Modelling for Integration of GNSS and Visual Odometry.
- Author
-
Duong, Thanh Trung, Chiang, Kai-Wei, and Le, Dinh Thuan
- Subjects
- *
GLOBAL Positioning System , *INERTIAL navigation systems , *KALMAN filtering , *MULTISENSOR data fusion , *VISUAL odometry - Abstract
Global navigation satellite systems (GNSSs) are commonly used for navigation and mapping applications. However, in GNSS-hostile environments, where the GNSS signal is noisy or blocked, the navigation information provided by a GNSS is inaccurate or unavailable. To overcome these issues, this study proposed a real-time visual odometry (VO)/GNSS integrated navigation system. An on-line smoothing method based on the extended Kalman filter (EKF) and the Rauch-Tung-Striebel (RTS) smoother was proposed. VO error modelling was also proposed to estimate the VO error and compensate the incoming measurements. Field tests were performed in various GNSS-hostile environments, including under a tree canopy and an urban area. An analysis of the test results indicates that with the EKF used for data fusion, the root-mean-square error (RMSE) of the three-dimensional position is about 80 times lower than that of the VO-only solution. The on-line smoothing and error modelling made the results more accurate, allowing seamless on-line navigation information. The efficiency of the proposed methods in terms of cost and accuracy compared to the conventional inertial navigation system (INS)/GNSS integrated system was demonstrated. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF