230 results for "errors-in-variables model"
Search Results
2. Autocorrelated unreplicated linear functional relationship model for multivariate time series data.
- Author
-
Chang, Yun Fah, Looi, Sing Yan, Pan, Wei Yeing, and Sim, Shin Zhu
- Subjects
- *ERRORS-in-variables models, *TIME series analysis, *DEPENDENT variables, *MEASUREMENT errors, *BOX-Jenkins forecasting, *TIME measurements
- Abstract
Conventional practice in handling cross-sectional data treats the explanatory variables as fixed and free of measurement errors. This article proposes a novel autocorrelated unreplicated linear functional relationship (AULFR) model to accommodate autocorrelated errors in the measurement errors model. Some basic properties of the model are derived. A modified backshift operator is used to transform the autocorrelated errors into uncorrelated errors. Simulation studies show that AULFR outperforms other benchmark models even with a relatively small training-data percentage or a sample size of 100 training observations. Application of the AULFR model to an actual economic case gives results consistent with the simulation studies. The advantages of the proposed model are that (i) it models the relationship between a time-based dependent variable and a set of time-based explanatory variables that are subject to measurement errors, and (ii) it can predict current or future values of both the dependent and explanatory variables using historical data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. A Proportionate Maximum Total Complex Correntropy Algorithm for Sparse Systems.
- Author
-
Huang, Sifan, Liu, Junzhu, Qian, Guobing, and Wang, Xin
- Subjects
- *ERRORS-in-variables models, *SPARSE matrices, *ADAPTIVE filters, *RANDOM noise theory, *SYSTEM identification
- Abstract
While the practical application of adaptive filters has indeed garnered substantial attention, two pressing issues persist that have a profound impact on their performance—system sparsity and the presence of contaminated Gaussian impulsive noise. In this research paper, we propose a novel approach to tackle both of these issues simultaneously by introducing the concept of a proportionate matrix. Specifically, we present a proportionate maximum total complex correntropy algorithm based on the errors-in-variables model. The paper presents a theoretical analysis of the steady-state weight error power under the influence of impulsive noise. Furthermore, it discusses the performance comparison in system identification and highlights the robustness of the proposed algorithm. To validate its effectiveness, a simulation involving stereophonic acoustic echo cancellation is conducted, and the results confirm the clear advantages of the proposed Proportionate Maximum Total Complex Correntropy algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Effects of errors-in-variables on the internal and external reliability measures
- Author
-
Yanxiong Liu, Yun Shi, Peiliang Xu, Wenxian Zeng, and Jingnan Liu
- Subjects
Weighted least squares, Errors-in-variables model, Nonlinear adjustment, Total least squares, Reliability theory, Geodesy, QB275-343, Geophysics. Cosmic physics, QC801-809
- Abstract
Reliability theory has been an important element of classical geodetic adjustment theory and methods in the linear Gauss-Markov model. Although errors-in-variables (EIV) models have been intensively investigated, little has been done about reliability theory for EIV models. This paper first investigates the effect of a random coefficient matrix A on the conventional geodetic reliability measures, treated as if the coefficient matrix were deterministic. The effects of the randomness of the coefficient matrix on the geodetic internal and external reliability measures are worked out and shown to depend not only on the noise level of the random elements of A but also on the parameter values. An alternative, linear approximate reliability theory is accordingly developed for use in EIV models. Both the EIV-affected reliability measures and the corresponding linear approximate measures fully account for the random errors of both the coefficient matrix and the observations, though formulated in slightly different ways. Numerical experiments have been carried out to demonstrate the effects of errors-in-variables on reliability measures and to compare them with the conventional Baarda reliability measures. The simulations confirm our theoretical result that the EIV reliability measures depend on both the noise level of A and the parameter values: the larger the noise level of A, the larger the EIV-affected internal and external reliability measures; and the larger the parameters, the larger the EIV-affected internal and external reliability measures.
- Published
- 2024
- Full Text
- View/download PDF
5. Toward a unified approach to the total least-squares adjustment.
- Author
-
Hu, Yu, Fang, Xing, and Zeng, Wenxian
- Abstract
In this paper, we analyze the general errors-in-variables (EIV) model, allowing both the uncertain coefficient matrix and the dispersion matrix to be rank-deficient. We derive the weighted total least-squares (WTLS) solution in the general case and find that, under the model consistency condition: (1) if the coefficient matrix is of full column rank, the parameter vector and the residual vector can be uniquely determined independently of the singularity of the dispersion matrix, which naturally extends the Neitzel/Schaffrin rank condition (NSC) in previous work; (2) in the rank-deficient case, the estimable functions and the residual vector can be uniquely determined. As a result, a unified approach for WTLS is provided by using generalized inverse matrices (g-inverses) as a principal tool. This method is unified because it fully considers the generality of the model setup, such as singularity of the dispersion matrix and multicollinearity of the coefficient matrix. It is flexible because it does not require distinguishing different cases before the adjustment. We analyze two examples, including the adjustment of the translation elimination model, where centralized coordinates for the symmetric transformation are applied, and the unified adjustment, where the higher-dimensional transformation model is explicitly compatible with the lower-dimensional transformation problem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
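As background to the total least-squares adjustment discussed in the record above, the sketch below shows the textbook unweighted, full-column-rank TLS solution computed from the SVD of the augmented matrix [A | y]. It is not the authors' generalized WTLS with g-inverses; the simulated data, noise levels, and function name are illustrative assumptions.

```python
import numpy as np

def tls(A, y):
    """Classical total least-squares solution of A x ~ y.

    Both A and y are assumed noisy with i.i.d. errors of equal variance.
    Uses the SVD of the augmented matrix [A | y] and assumes the smallest
    singular value is simple (the usual genericity condition).
    """
    n = A.shape[1]
    V = np.linalg.svd(np.column_stack([A, y]))[2].T   # right singular vectors
    v = V[:, -1]                                      # direction of the smallest singular value
    return -v[:n] / v[n]                              # TLS parameter estimate

rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0, 0.5])
A_true = rng.normal(size=(200, 3))
y_true = A_true @ x_true
A_obs = A_true + 0.05 * rng.normal(size=A_true.shape)   # noisy coefficient matrix
y_obs = y_true + 0.05 * rng.normal(size=y_true.shape)   # noisy observations

x_ols = np.linalg.lstsq(A_obs, y_obs, rcond=None)[0]
x_tls = tls(A_obs, y_obs)
print("OLS:", x_ols, "\nTLS:", x_tls)
```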
6. Minimal clinically important change of knee flexion in people with knee osteoarthritis after non-surgical interventions using a meta-analytical approach
- Author
-
M. Denika C. Silva, Andrew P. Woodward, Angela M. Fearon, Diana M. Perriman, Trevor J. Spencer, Jacqui M. Couldrick, and Jennie M. Scarvell
- Subjects
Knee osteoarthritis ,Knee flexion ,Minimal clinically important change ,Minimal clinically important difference ,Non-surgical interventions ,Errors-in-variables model ,Medicine - Abstract
Abstract Background Minimal clinically important change (MCIC) represents the minimum patient-perceived improvement in an outcome after treatment, in an individual or within a group over time. This study aimed to determine MCIC of knee flexion in people with knee OA after non-surgical interventions using a meta-analytical approach. Methods Four databases (MEDLINE, Cochrane, Web of Science and CINAHL) were searched for studies of randomised clinical trials of non-surgical interventions with intervention duration of ≤ 3 months that reported change in (Δ) (mean change between baseline and immediately after the intervention) knee flexion with Δ pain or Δ function measured using tools that have established MCIC values. The risk of bias in the included studies was assessed using version 2 of the Cochrane risk-of-bias tool for randomised trials (RoB 2). Bayesian meta-analytic models were used to determine relationships between Δ flexion with Δ pain and Δ function after non-surgical interventions and MCIC of knee flexion. Results Seventy-two studies (k = 72, n = 5174) were eligible. Meta-analyses included 140 intervention arms (k = 61, n = 4516) that reported Δ flexion with Δ pain using the visual analog scale (pain-VAS) and Δ function using the Western Ontario and McMaster Universities Osteoarthritis Index function subscale (function-WOMAC). Linear relationships between Δ pain at rest-VAS (0–100 mm) with Δ flexion were − 0.29 (− 0.44; − 0.15) (β: posterior median (CrI: credible interval)). Relationships between Δ pain during activity VAS and Δ flexion were − 0.29 (− 0.41, − 0.18), and Δ pain-general VAS and Δ flexion were − 0.33 (− 0.42, − 0.23). The relationship between Δ function-WOMAC (out of 100) and Δ flexion was − 0.15 (− 0.25, − 0.07). Increased Δ flexion was associated with decreased Δ pain-VAS and increased Δ function-WOMAC. The point estimates for MCIC of knee flexion ranged from 3.8 to 6.4°. Conclusions The estimated knee flexion MCIC values from this study are the first to be reported using a novel meta-analytical method. The novel meta-analytical method may be useful to estimate MCIC for other measures where anchor questions are problematic. Systematic review registration PROSPERO CRD42022323927.
- Published
- 2024
- Full Text
- View/download PDF
7. Robust Total Least Mean M-Estimate Normalized Subband Filter Adaptive Algorithm Under EIV Model in Impulsive Noise
- Author
-
Zhao, Haiquan, Cao, Zian, and Chen, Yida
- Published
- 2024
- Full Text
- View/download PDF
8. Minimal clinically important change of knee flexion in people with knee osteoarthritis after non-surgical interventions using a meta-analytical approach
- Author
-
Silva, M. Denika C., Woodward, Andrew P., Fearon, Angela M., Perriman, Diana M., Spencer, Trevor J., Couldrick, Jacqui M., and Scarvell, Jennie M.
- Published
- 2024
- Full Text
- View/download PDF
9. Sparse estimation in high-dimensional linear errors-in-variables regression via a covariate relaxation method.
- Author
-
Li, Xin and Wu, Dongya
- Abstract
Sparse signal recovery in high-dimensional settings via regularization techniques has been developed over the past two decades and has produced fruitful results in various areas. Previous studies mainly focus on the idealized assumption that covariates are free of noise. However, in realistic scenarios, covariates are always corrupted by measurement errors, which may induce significant estimation bias when methods designed for clean data are naively applied. Recent studies have begun to deal with errors-in-variables models. Current methods either depend on the distribution of the covariate noise or are distribution-free but inconsistent in parameter estimation. A novel covariate relaxation method that does not depend on the distribution of the covariate noise is proposed. Statistical consistency of the parameter estimates is established. Numerical experiments show that the covariate relaxation method achieves the same or even better estimation accuracy than the state-of-the-art nonconvex Lasso estimator. The fact that the covariate relaxation method is independent of the distribution of the covariate noise while producing a small estimation error suggests its promise in practical applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
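The bias problem described in the abstract above can be illustrated with the classical moment-corrected least-squares estimator for a known covariate-noise covariance. This is a generic low-dimensional sketch, not the paper's covariate relaxation method; the noise levels and data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 5000, 3
beta = np.array([1.5, -2.0, 0.7])
X = rng.normal(size=(n, p))                 # true (unobserved) covariates
y = X @ beta + 0.1 * rng.normal(size=n)     # clean response
sigma_w = 0.5
W = X + sigma_w * rng.normal(size=(n, p))   # observed covariates with additive noise

# Naive least squares on the noisy covariates: attenuated towards zero.
beta_naive = np.linalg.lstsq(W, y, rcond=None)[0]

# Moment correction for known noise covariance Sigma_w = sigma_w^2 * I:
# E[W'W/n] = X'X/n + Sigma_w, so subtract Sigma_w before solving.
Sigma_w = sigma_w**2 * np.eye(p)
beta_corr = np.linalg.solve(W.T @ W / n - Sigma_w, W.T @ y / n)

print("true:     ", beta)
print("naive LS: ", beta_naive)
print("corrected:", beta_corr)
```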
10. Straight-line fitting based on the weighted total least median of squares method.
- Author
-
徐峥
- Published
- 2023
- Full Text
- View/download PDF
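No abstract is indexed for this entry. Based on the title alone (weighted total least median of squares line fitting), the sketch below shows a generic, unweighted least-median-of-squares orthogonal line fit by random pair sampling; it is only a plausible reading of the topic, not the paper's algorithm, and all names and data are assumptions.

```python
import numpy as np

def lmeds_line(points, n_trials=500, rng=None):
    """Robust orthogonal line fit: minimize the median squared orthogonal residual.

    The line is returned in Hessian normal form (n, d) with n'p = d.
    Candidate lines are generated from randomly sampled point pairs.
    """
    rng = rng or np.random.default_rng()
    best = None
    for _ in range(n_trials):
        i, j = rng.choice(len(points), size=2, replace=False)
        t = points[j] - points[i]
        if np.allclose(t, 0):
            continue
        n = np.array([-t[1], t[0]]) / np.linalg.norm(t)   # unit normal of the candidate line
        d = n @ points[i]
        med = np.median((points @ n - d) ** 2)            # median squared orthogonal residual
        if best is None or med < best[0]:
            best = (med, n, d)
    return best[1], best[2]

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 80)
pts = np.column_stack([t, 0.5 * t + 1.0]) + 0.05 * rng.normal(size=(80, 2))
pts[:15] += rng.normal(5.0, 2.0, size=(15, 2))            # gross outliers
n_hat, d_hat = lmeds_line(pts, rng=rng)
print("normal:", n_hat, "offset:", d_hat)                  # line n'p = d
```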
11. Spatial Linear Regression with Covariate Measurement Errors: Inference and Scalable Computation in a Functional Modeling Approach.
- Author
-
Cao, Jiahao, He, Shiyuan, and Zhang, Bohai
- Subjects
- *MEASUREMENT errors, *ERRORS-in-variables models, *ASYMPTOTIC distribution, *SEA ice, *STRUCTURAL models, *SAMPLE size (Statistics)
- Abstract
For spatial linear models, the classical maximum-likelihood estimators of both regression coefficients and variance components can be biased when the covariates are measured with errors. This work introduces a theoretically backed-up estimation framework for the spatial linear errors-in-variables model in a functional approach. Compared with structural models, the functional approach treats the unobserved true covariates as fixed unknown parameters without imposing additional structure, thus leading to more robust parameter inference. Our model parameters are estimated simultaneously based on a set of unbiased estimating equations. Under some regularity conditions, we prove the consistency of the proposed estimating-equation estimators and derive their asymptotic distribution. In addition, a consistent variance estimator is developed for the estimating-equation estimators. To handle large spatial datasets, we provide two approaches to obtain scalable estimations based on our proposed estimating equations, where the required computational time and storage are reduced to be linear in sample size for each estimating-function evaluation. Simulation studies under different settings show that our estimators are consistent and the scalable algorithms work well. Finally, the proposed method is applied to studying the relationship between Arctic sea ice and related geophysical variables. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. Complete q-th moment convergence for the maximum of partial sums of m-negatively associated random variables and its application to the EV regression model*.
- Author
-
Jiang, Fen, Wang, Miaomiao, and Wang, Xuejun
- Subjects
- *ERRORS-in-variables models, *RANDOM variables, *LEAST squares
- Abstract
In this article, we prove the complete q-th moment convergence for the maximum of partial sums of m-negatively associated random variables {X_n, n ≥ 1} under some general conditions. The results obtained in this article are extensions of previous studies for m-negatively associated random variables. In addition, we investigate the strong consistency of the least squares estimator in the simple linear errors-in-variables model based on m-negatively associated random variables, and provide some simulations to assess the finite sample performance of the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. Quantity properties of variate and coefficient in errors-in-variables model under Gaussian noise.
- Author
-
Cui, Xunxue, Qiu, Guoxin, and Yu, Kegen
- Subjects
- *ERRORS-in-variables models, *RANDOM noise theory, *PARAMETER estimation, *TIME management
- Abstract
• More variates or coefficients are better for the EIV/TLS problem. • Quantity properties confirmed by analytical derivation of the CRB. • Additional variates and coefficients are encouraged for practical EIV systems. • Simulated via the CRB and HBB bounds for TDOA-based source bearing estimation. Total least-squares (TLS) aims to estimate the unknown parameters of an errors-in-variables (EIV) model from noisy observations when the coefficients are also perturbed by errors. It is helpful to know whether more variates and coefficients lead to more accurate estimation for TLS. In this study, our motivation was to reveal the relationship between estimation performance and the number of variates or coefficients. The challenge was how to observe a performance change when additional variates/coefficients are used. First, the Cramér–Rao bound (CRB) and the hybrid Bhattacharyya–Barankin (HBB) bound were derived for the multivariate EIV model under Gaussian noise, which is typical of most systems. Second, we theoretically confirmed, via the analytical CRB, that additional variates/coefficients are favorable. Third, the quantity properties were verified by simulating the CRB and HBB, taking source direction estimation using time difference of arrival as an example. We conclude that estimation performance can be improved if additional variates/coefficients are available. The multivariate TLS should be superior to the univariate one. These conclusions have significance for guiding practical system development. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Improved Robust Total Least Squares Adaptive Filter Algorithms Using Hyperbolic Secant Function.
- Author
-
Chen, Yida and Zhao, Haiquan
- Abstract
In the errors-in-variables (EIV) model, the total least squares (TLS) algorithm has shown good convergence performance. In this model, the input and output signals are simultaneously corrupted by noise. However, when the output signal is contaminated with impulse noise, the convergence performance of TLS degrades or even diverges. To address this problem, we propose a hyperbolic secant total least squares (HSTLS) algorithm that adopts the hyperbolic secant function as the cost function. The HSTLS algorithm shows better performance in non-Gaussian noise environments. At the same time, to resolve the trade-off between convergence speed and steady-state deviation under a fixed step size, this brief also provides a new variable step size strategy. The improved variable step size HSTLS (VHSTLS) algorithm shows better performance in simulation. Furthermore, to reduce the steady-state deviation of the HSTLS algorithm in sparse systems, a series of improved HSTLS algorithms based on sparsity criteria have also been proposed, and they likewise show excellent convergence performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
16. Tikhonov-regularized weighted total least squares formulation with applications to geodetic problems.
- Author
-
Kariminejad, M. M., Sharifi, M. A., and Amiri-Simkooei, A. R.
- Subjects
- *LEAST squares, *ERRORS-in-variables models, *AFFINE transformations, *TIKHONOV regularization, *NONLINEAR equations
- Abstract
This contribution presents the Tikhonov-regularized weighted total least squares (TRWTLS) solution in an errors-in-variables (EIV) model. Previous attempts solved this problem based on the hybrid approximation solution (HAPS) within a nonlinear Gauss-Helmert model. The present formulation is a generalized form of the classical nonlinear Gauss-Helmert model, formulated in an EIV general mixed observation model. It is a follow-up to previous work in which the WTLS problems were formulated based on standard least squares (SLS) theory. Two cases, namely the EIV parametric model and the classical nonlinear mixed model, can be considered special cases of the general mixed observation model. These formulations are conceptually simple because they are based on SLS theory, and consequently existing SLS knowledge can be applied directly to the ill-posed mixed EIV model. Two geodetic applications are then adopted to illustrate the developed theory. As a first case, 2D affine transformation parameters (six-parameter affine transformation) for ill-scattered data points are solved by the TRWTLS method. Second, the circle fitting problem, as a nonlinear case, is tackled for both well-scattered and ill-scattered data points in a nonlinear mixed model. Finally, all results indicate that the Tikhonov regularization provides a stable and reliable solution to an ill-posed WTLS problem, and hence an efficient method applicable to many engineering problems. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
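Only as a reminder of the standard-least-squares building block the abstract above refers to, here is a minimal Tikhonov (ridge) regularization sketch solved through the stacked augmented system. The full TRWTLS iteration of the paper is not reproduced; the toy ill-conditioned problem and the regularization parameter are assumptions.

```python
import numpy as np

def tikhonov_ls(A, b, lam):
    """Tikhonov-regularized least squares: min ||A x - b||^2 + lam * ||x||^2.

    Solved by stacking [A; sqrt(lam) I] and [b; 0], which avoids forming
    the normal equations explicitly.
    """
    m, n = A.shape
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

# Ill-conditioned toy problem: nearly collinear columns.
rng = np.random.default_rng(3)
base = rng.normal(size=(100, 1))
A = np.hstack([base, base + 1e-4 * rng.normal(size=(100, 1))])
b = A @ np.array([1.0, 1.0]) + 0.01 * rng.normal(size=100)
print("plain LS :", np.linalg.lstsq(A, b, rcond=None)[0])
print("Tikhonov :", tikhonov_ls(A, b, lam=1e-3))
```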
17. Total Least Squares Normalized Subband Adaptive Filter Algorithm for Noisy Input.
- Author
-
Zhao, Haiquan, Chen, Yida, Liu, Jun, and Zhu, Yingying
- Abstract
Subband adaptive filters have been widely applied to process correlated input signals due to their decorrelation property. However, the performance of the subband adaptive filter algorithm degrades drastically when both the input and output signals are contaminated with noise. To tackle this problem, this brief proposes a total least squares normalized subband adaptive filter (TLS-NSAF) algorithm, which is different from bias-compensated schemes. The proposed algorithm is derived by employing the Rayleigh quotient as the cost function and the gradient steepest descent method. The local mean stability and computational complexity of the proposed algorithm are also analyzed. Simulation results demonstrate that the TLS-NSAF algorithm performs better than previous algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
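As a simplified, single-band illustration of the Rayleigh-quotient cost mentioned in the abstract above, the sketch below runs a stochastic-gradient total-least-squares filter (often called TLS-LMS) that tolerates noise on both input and output. The subband structure and normalization of the TLS-NSAF algorithm are not reproduced; the step size, filter length, and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
w_true = rng.normal(size=8)                     # unknown system to identify
N, mu = 20000, 0.002
x_clean = rng.normal(size=N + len(w_true))
w = np.zeros_like(w_true)

for k in range(N):
    u_clean = x_clean[k:k + len(w_true)]
    d = u_clean @ w_true + 0.05 * rng.normal()            # noisy desired signal
    u = u_clean + 0.05 * rng.normal(size=u_clean.shape)   # noisy input (EIV setting)
    denom = 1.0 + w @ w
    e = d - u @ w
    # Stochastic gradient descent step on the TLS cost e^2 / (1 + ||w||^2).
    w += mu * (e * u + (e**2 / denom) * w) / denom

print("true   :", np.round(w_true, 3))
print("adapted:", np.round(w, 3))
```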
18. Asymptotic normality and mean consistency of LS estimators in the errors-in-variables model with dependent errors
- Author
-
Zhang Yu, Liu Xinsheng, Yu Yuncai, and Hu Hongchang
- Subjects
errors-in-variables model, negatively superadditive dependent, asymptotic normality, mean consistency, strong law of large numbers, 62f12, 60f05, 62e20, 62j05, Mathematics, QA1-939
- Abstract
In this article, an errors-in-variables regression model in which the errors are negatively superadditive dependent (NSD) random variables is studied. First, the Marcinkiewicz-type strong law of large numbers for NSD random variables is established. Then, we use the strong law of large numbers to investigate the asymptotic normality of least square (LS) estimators for the unknown parameters. In addition, the mean consistency of LS estimators for the unknown parameters is also obtained. Some results for independent random variables and negatively associated random variables are extended and improved to the case of NSD setting. At last, two simulations are presented to verify the asymptotic normality and mean consistency of LS estimators in the model.
- Published
- 2020
- Full Text
- View/download PDF
19. Generalized maximum total correntropy adaptive filtering algorithm.
- Author
-
赵海全 and 陈奕达
- Published
- 2021
- Full Text
- View/download PDF
20. A modified iterative algorithm for the weighted total least squares.
- Author
-
Naeimi, Younes and Voosoghi, Behzad
- Subjects
- *LEAST squares, *COVARIANCE matrices, *LAGRANGE multiplier, *ALGORITHMS, *ERRORS-in-variables models, *PROGRESSIVE collapse
- Abstract
In this paper, the method used for solving the weighted total least squares is first discussed in two cases: (1) the parameter corresponding to the erroneous column in the design matrix is a scalar, model (H + G)T r + δ = q + e; (2) the parameter corresponding to the erroneous column in the design matrix is a vector, model (H + G)T r + δ = q + e. Available techniques for solving TLS are based on the SVD and have a high computational burden. Moreover, the other presented methods that do not use the SVD need large matrices and require zeros to be placed in the covariance matrix of the design matrix for the error-free columns, which in turn increases the matrix size and the volume of the calculations. In the proposed method, however, the problem is solved without the SVD and without introducing Lagrange multipliers, thus avoiding having to declare some columns of the design matrix error-free by entering zeros in its covariance matrix. It needs only simple equations based on the principles of summation, which results in very low computing effort and high speed. Another advantage of this method is that, owing to the similarity between this solution method and the ordinary least squares method, one can determine the covariance matrix of the estimated parameters by the error propagation law and exploit other advantages of the ordinary least squares method. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
21. Minimax Rates of ℓp-Losses for High-Dimensional Linear Errors-in-Variables Models over ℓq-Balls
- Author
-
Xin Li and Dongya Wu
- Subjects
sparse linear regression, errors-in-variables model, minimax rate, Kullback–Leibler divergence, information-theoretic limitations, Science, Astrophysics, QB460-466, Physics, QC1-999
- Abstract
In this paper, the high-dimensional linear regression model is considered, where the covariates are measured with additive noise. Different from most of the other methods, which are based on the assumption that the true covariates are fully obtained, results in this paper only require that the corrupted covariate matrix is observed. Then, by the application of information theory, the minimax rates of convergence for estimation are investigated in terms of the ℓp(1≤p<∞)-losses under the general sparsity assumption on the underlying regression parameter and some regularity conditions on the observed covariate matrix. The established lower and upper bounds on minimax risks agree up to constant factors when p=2, which together provide the information-theoretic limits of estimating a sparse vector in the high-dimensional linear errors-in-variables model. An estimator for the underlying parameter is also proposed and shown to be minimax optimal in the ℓ2-loss.
- Published
- 2021
- Full Text
- View/download PDF
22. Convergence rates in the weak law of large numbers for weighted sums of i.i.d. random variables and applications in errors-in-variables models.
- Author
-
Zhang, Mingyang, Chen, Pingyan, and Sung, Soo Hak
- Subjects
- *ERRORS-in-variables models, *LAW of large numbers, *RANDOM variables, *REGRESSION analysis
- Abstract
We prove necessary and sufficient conditions for the convergence rates in the weak law of large numbers for weighted sums of independent and identically distributed (i.i.d.) random variables. This result is applied to simple linear error-in-variables regression models. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
23. Jackknife covariance matrix estimation for observations from mixture.
- Author
-
Maiboroda, Rostyslav and Sugakova, Olena
- Subjects
COVARIANCE matrices, POCKETKNIVES, ERRORS-in-variables models, MIXTURES, SOCIOLOGICAL research
- Abstract
A general jackknife estimator for the asymptotic covariance of moment estimators is considered in the case when the sample is taken from a mixture with varying concentrations of components. Consistency of the estimator is demonstrated. A fast algorithm for its calculation is described. The estimator is applied to construction of confidence sets for regression parameters in the linear regression with errors in variables. An application to sociological data analysis is considered. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
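Alongside the abstract above, here is a plain delete-one jackknife covariance estimate for a simple moment estimator (an OLS intercept and slope). The mixture-with-varying-concentrations setting and the fast algorithm of the paper are not reproduced; the estimator and data are illustrative assumptions.

```python
import numpy as np

def ols(x, y):
    """Moment estimator: intercept and slope of a simple linear regression."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def jackknife_cov(x, y, estimator):
    """Delete-one jackknife estimate of the covariance of `estimator(x, y)`."""
    n = len(x)
    reps = np.array([estimator(np.delete(x, i), np.delete(y, i)) for i in range(n)])
    mean_rep = reps.mean(axis=0)
    return (n - 1) / n * (reps - mean_rep).T @ (reps - mean_rep)

rng = np.random.default_rng(5)
x = rng.normal(size=300)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=300)
theta = ols(x, y)
cov = jackknife_cov(x, y, ols)
print("estimate:", theta)
print("jackknife standard errors:", np.sqrt(np.diag(cov)))
```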
24. Weighted total least-squares joint adjustment with weight correction factors.
- Author
-
Wang, Leyang and Yu, Hang
- Subjects
- *CORRECTION factors, *RANDOM effects model, *AFFINE transformations, *ERRORS-in-variables models
- Abstract
A joint adjustment involves integrating different types of geodetic datasets, or multiple datasets of the same data type, into a single adjustment. This paper applies the weighted total least-squares (WTLS) principle to joint adjustment problems and proposes an iterative algorithm for WTLS joint (WTLS-J) adjustment with weight correction factors. Weight correction factors are used to rescale the weight matrix of each dataset while using the Helmert variance component estimation (VCE) method to estimate the variance components, since the variance components in the stochastic model are unknown. An affine transformation example is illustrated to verify the practical benefit and the relative computational efficiency of the proposed algorithm. It is shown that the proposed algorithm obtains the same parameter estimates as the Amiri-Simkooei algorithm in our example; however, the proposed algorithm has its own computational advantages, especially when the number of data points is large. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
25. A weighted total least squares adjustment method for the regularization of two-dimensional right-angled buildings.
- Author
-
李林林, 周拥军, and 周瑜
- Abstract
An errors-in-variables (EIV) model with arbitrary constraints is proposed for building regularization from remotely sensed data, in which the edge points are treated as measurements and the constrained slopes and intercepts of each edge are chosen as parameters. Assuming the measurement vector and the design matrix are mutually correlated, a scheme for calculating the dispersion matrix of the augmented matrix is suggested. A generic constrained weighted total least squares (WTLS) algorithm is derived together with an approximate accuracy assessment method, and the WTLS algorithm for a quadratically constrained EIV problem is given as a specific case. Theoretical analysis and data experiments demonstrate the advantages of an EIV model compared with a Gauss-Helmert model for the building regularization problem, and the rapid convergence of the proposed WTLS algorithm. The work aims to promote WTLS adjustment methods and to expand the applications of the total least squares method in new surveying technology, with both theoretical and practical significance. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
26. On the errors-in-variables model with inequality constraints of dependent variables for geodetic transformation.
- Author
-
Zeng, W., Fang, X., Lin, Y., Huang, X., and Yao, Y.
- Subjects
- *GEODESY, *LEAST squares, *GLOBAL Positioning System, *STOCHASTIC approximation, *COVARIANCE matrices
- Abstract
The Total least-squares (TLS) adjustment with inequality constraints has received increased attention in geodesy over the last three years. In the most recent work, inequality constraints have been presented that can restrict unknown parameters and independent variables, but no one has provided an inequality-constrained adjustment for restricting dependent variables. In this work, we review the TLS adjustment methods in terms of different model formulations and then investigate the errors-in-variables model with inequality constraints for dependent variables. Finally, we demonstrate the practicality of our approach with a planar geodetic transformation, where the uncertainty of the target observations is reduced via the inequality constraints for dependent variables. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
27. Energy performance of buildings: A statistical approach to marry calculated demand and measured consumption.
- Author
-
Hörner, Michael and Lichtmeß, Markus
- Subjects
- *ENERGY consumption of buildings, *ENERGY demand management, *ENERGY conservation, *DWELLINGS, *REGRESSION analysis
- Abstract
In public debate, Energy Performance Certificates (EPCs) of buildings have been criticised for not reflecting the energy demand realistically. And indeed, measurement, as in energy bills, usually differs from the calculation, in particular, when simplified energy performance calculation models and standard specifications are applied, as in EPCs. Thus, energy-saving potentials of refurbishment recommendations and their cost-effectiveness tend to be over-estimated. Of course, this is not desirable. These effects were analysed in two sets of data, the Energy Performance Certificate Register for residential buildings in Luxemburg, run by the Luxemburg Ministry of the Economy (Lichtmeß, 2012) and a database gathered in the research project "Teilenergiekennwerte von Nichtwohngebäuden (TEK)" (Hörner et al., 2014a) funded by the German Federal Ministry of Economic Affairs and Energy. Multiple linear regression and error calculus were applied to study the gap between measurement and various calculation models in detail. A statistical procedure is proposed to estimate expectation value and variance of the future energy consumption of buildings in case of refurbishment, as a supplement to standard calculations in EPCs for example. Prerequisite is that for a sufficient number of buildings, data on both, measured energy consumption and calculated demand, are available. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
28. Conditional likelihood inference in a heteroscedastic functional measurement error model.
- Author
-
Galea, Manuel and de Castro, Mário
- Subjects
- *ERRORS-in-variables models, *HETEROSCEDASTICITY, *MEASUREMENT errors, *STATISTICAL hypothesis testing, *INFERENTIAL statistics, *GAUSSIAN distribution, *CHRONIC myeloid leukemia
- Abstract
In this paper, we deal with inference about the structural parameters in a heteroscedastic functional measurement error model under the normal distribution assumption. Given a minimal sufficient statistic for the incidental parameters, the conditional maximum likelihood (CML) approach is used. We show that CML estimators have explicit expressions and that their sampling distribution is exact. We also show that the classical test statistics for hypotheses of interest coincide and have exact distributions. We apply the statistical inference tools developed to a data set on the comparison of measurement methods. • Inference on the structural parameters in a functional measurement error model. • Minimal sufficient statistic for the incidental parameters. • We apply the developed methodology to a data set on comparison of measurement methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. Total Least-Squares Collocation: An Optimal Estimation Technique for the EIV-Model with Prior Information
- Author
-
Burkhard Schaffrin
- Subjects
Errors-In-Variables Model ,Total Least-Squares ,prior information ,collocation vs. adjustment ,Mathematics ,QA1-939 - Abstract
In regression analysis, oftentimes a linear (or linearized) Gauss-Markov Model (GMM) is used to describe the relationship between certain unknown parameters and measurements taken to learn about them. As soon as there are more than enough data collected to determine a unique solution for the parameters, an estimation technique needs to be applied such as ‘Least-Squares adjustment’, for instance, which turns out to be optimal under a wide range of criteria. In this context, the matrix connecting the parameters with the observations is considered fully known, and the parameter vector is considered fully unknown. This, however, is not always the reality. Therefore, two modifications of the GMM have been considered, in particular. First, ‘stochastic prior information’ (p. i.) was added on the parameters, thereby creating the – still linear – Random Effects Model (REM) where the optimal determination of the parameters (random effects) is based on ‘Least Squares collocation’, showing higher precision as long as the p. i. was adequate (Wallace test). Secondly, the coefficient matrix was allowed to contain observed elements, thus leading to the – now nonlinear – Errors-In-Variables (EIV) Model. If not using iterative linearization, the optimal estimates for the parameters would be obtained by ‘Total Least Squares adjustment’ and with generally lower, but perhaps more realistic precision. Here the two concepts are combined, thus leading to the (nonlinear) ’EIV-Model with p. i.’, where an optimal estimation (resp. prediction) technique is developed under the name of ‘Total Least-Squares collocation’. At this stage, however, the covariance matrix of the data matrix – in vector form – is still being assumed to show a Kronecker product structure.
- Published
- 2020
- Full Text
- View/download PDF
30. Solution of the weighted symmetric similarity transformations based on quaternions.
- Author
-
Mercan, H., Akyilmaz, O., and Aydin, C.
- Subjects
- *QUATERNIONS, *SIMILARITY transformations, *ERRORS-in-variables models, *HETEROSCEDASTICITY, *LEAST squares
- Abstract
A new method based on the Gauss-Helmert model of adjustment is presented for the solution of similarity transformations, either 3D or 2D, in the frame of the errors-in-variables (EIV) model. The EIV model assumes that all the variables in the mathematical model are contaminated by random errors. The total least squares estimation technique may be used to solve the EIV model. Accounting for the heteroscedastic uncertainty in both the target and the source coordinates, which is the more common and general case in practice, leads to a more realistic estimation of the transformation parameters. The presented algorithm can handle heteroscedastic transformation problems, i.e., the positions of both the target and the source points may have full covariance matrices. Therefore, there is no limitation such as isotropic or homogeneous accuracy for the reference point coordinates. The developed algorithm takes advantage of the quaternion definition, which uniquely represents a 3D rotation matrix. The transformation parameters (scale, translations, and the quaternion, hence the rotation matrix), along with their covariances, are iteratively estimated with rapid convergence. Moreover, a prior least squares (LS) estimation of the unknown transformation parameters is not required to start the iterations. We also show that the developed method can be used to estimate the 2D similarity transformation parameters by simply treating the problem as a 3D transformation with zero (0) values assigned to the z-components of both target and source points. The efficiency of the new algorithm is demonstrated with numerical examples and comparisons with the results of previous studies that use the same data set. Simulation experiments for the evaluation and comparison of the proposed and the conventional weighted LS (WLS) methods are also presented. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
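To make the quaternion parameterization in the abstract above concrete, here is a minimal, equally weighted sketch of the classical Horn closed-form absolute orientation: the rotation comes from the dominant eigenvector of a 4x4 matrix built from the cross-covariance, followed by scale and translation. The heteroscedastic Gauss-Helmert/EIV estimation of the paper is not reproduced; the test points, noise level, and function names are assumptions.

```python
import numpy as np

def quat_to_rot(q):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def horn_similarity(src, dst):
    """Equally weighted 3D similarity transform dst ~ s * R @ src + t (Horn, 1987)."""
    p, q = src - src.mean(0), dst - dst.mean(0)
    S = p.T @ q                                     # cross-covariance, S[a, b] = sum p_a * q_b
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz],
    ])
    vals, vecs = np.linalg.eigh(N)
    R = quat_to_rot(vecs[:, -1])                    # eigenvector of the largest eigenvalue
    s = np.sum(q * (p @ R.T)) / np.sum(p * p)       # least-squares scale
    t = dst.mean(0) - s * R @ src.mean(0)
    return s, R, t

rng = np.random.default_rng(6)
src = rng.normal(size=(10, 3))
q_true = np.array([0.9, 0.1, 0.3, 0.3])
R_true = quat_to_rot(q_true / np.linalg.norm(q_true))
dst = 1.7 * src @ R_true.T + np.array([5.0, -2.0, 0.5]) + 0.01 * rng.normal(size=(10, 3))
s, R, t = horn_similarity(src, dst)
print("scale:", s, "\ntranslation:", t)
```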
31. Accurate coupled lines fitting in an errors-in-variables framework.
- Author
-
Zhou, Yongjun, Gong, Jinghai, and Fang, Xing
- Subjects
- *LIDAR, *REVERSE engineering, *METROLOGY, *GEOMETRY, *ALGORITHMS
- Abstract
For the purpose of accurate measurement of regular polygons, boundary lines with parallel, perpendicular or otherwise given angles are defined as coupled lines. Provided that the noisy data points are measured from each line without outliers, an accurate and numerically reliable weighted total least squares (WTLS) method is proposed for the two-dimensional coupled lines fitting task. The underlying problem is modelled within an errors-in-variables framework by assuming that all the coordinates are subject to random errors. In order to overcome possible ill-posedness, the lines are parameterised in constrained Hessian normal form instead of the intercept-slope form. A generic WTLS algorithm is derived for the case in which the random columns are corrupted by fully correlated errors. The special case in which the data are corrupted by homoscedastic errors is considered and solved with less computational expense. A single-line fitting example and a simulated regular hexagon fitting example are presented with comparisons and discussions. The proposed methods can be used for accurate regular polygon measurement in vision metrology or reverse engineering. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
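The single-line building block of the coupled-lines method above, an orthogonal (TLS) fit in Hessian normal form, has a well-known closed form for the equally weighted case, sketched below. The correlated-error WTLS and the angle constraints of the paper are not reproduced; the simulated line and noise are assumptions.

```python
import numpy as np

def orthogonal_line_fit(points):
    """Equally weighted orthogonal (TLS) line fit in Hessian normal form n'p = d.

    The unit normal n is the eigenvector of the smallest eigenvalue of the
    centered scatter matrix; d follows from passing the line through the centroid.
    """
    c = points.mean(axis=0)
    scatter = (points - c).T @ (points - c)
    vals, vecs = np.linalg.eigh(scatter)   # eigenvalues in ascending order
    n = vecs[:, 0]                         # direction of least scatter = line normal
    return n, n @ c

rng = np.random.default_rng(7)
t = np.linspace(-5, 5, 100)
pts = np.column_stack([t, 2.0 * t - 3.0])
pts += 0.05 * rng.normal(size=pts.shape)       # noise on both coordinates (EIV setting)
n_hat, d_hat = orthogonal_line_fit(pts)
print("normal:", n_hat, "offset:", d_hat)      # expected normal proportional to (2, -1)
```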
32. Estimation and test of restricted linear EV model with nonignorable missing covariates.
- Author
-
Tang, Lin-jun, Zheng, Sheng-chao, and Zhou, Zhan-gong
- Abstract
This paper deals with estimation and test procedures for restricted linear errors-in-variables (EV) models with nonignorable missing covariates. We develop a restricted weighted corrected least squares (WCLS) estimator based on the propensity score, which is fitted by an exponentially tilted likelihood method. The limiting distributions of the proposed estimators are discussed when the tilted parameter is known or unknown. To test the validity of the constraints, we construct two test procedures based on the corrected residual sum of squares and the empirical likelihood method and derive their asymptotic properties. Numerical studies are conducted to examine the finite sample performance of our proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
33. A heteroscedastic measurement error model based on skew and heavy-tailed distributions with known error variances.
- Author
-
Tomaya, Lorena Cáceres and de Castro, Mário
- Subjects
- *MEASUREMENT errors, *VARIANCES, *DISTRIBUTION (Probability theory), *SKEWNESS (Probability theory), *MAXIMUM likelihood statistics
- Abstract
In this paper, we study inference in a heteroscedastic measurement error model with known error variances. Instead of the normal distribution for the random components, we develop a model that assumes a skew-t distribution for the true covariate and a centred Student's t distribution for the error terms. The proposed model accommodates skewness and heavy-tailedness in the data, while the degrees of freedom of the distributions can be different. Maximum likelihood estimates are computed via an EM-type algorithm. The behaviour of the estimators is also assessed in a simulation study. Finally, the approach is illustrated with a real data set from a methods comparison study in Analytical Chemistry. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
34. A Minimax Approach to Errors-in-Variables Linear Models.
- Author
-
Golubev, Yu.
- Abstract
The paper considers a simple Errors-in-Variables (EiV) model Y_i = a + bX_i + εξ_i; Z_i = X_i + σζ_i, where ξ_i, ζ_i are i.i.d. standard Gaussian random variables, X_i ∈ ℝ are unknown non-random regressors, and ε, σ are known noise levels. The goal is to estimate the unknown parameters a, b ∈ ℝ based on the observations {Y_i, Z_i, i = 1, …, n}. It is well known [3] that the maximum likelihood estimates of these parameters have unbounded moments. In order to construct estimates with good statistical properties, we study the EiV model in the large-noise regime, assuming that n → ∞ but ε² = nε₀², σ² = nσ₀² for some ε₀², σ₀² > 0. Under these assumptions, a minimax approach to estimating a, b is developed. It is shown that the minimax estimates are solutions to a convex optimization problem, and a fast algorithm for solving it is proposed. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
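For reference alongside the minimax discussion above, the sketch below simulates the same simple EiV model and applies the classical orthogonal-regression (Deming) slope estimate for a known noise-variance ratio. It is not the paper's minimax estimator, and the parameter values and noise levels are assumptions.

```python
import numpy as np

def deming_slope(z, y, lam):
    """Classical EIV slope estimate for y = a + b*x when the ratio lam of the
    output error variance to the input error variance is known (ML/Deming)."""
    szz = np.var(z, ddof=1)
    syy = np.var(y, ddof=1)
    szy = np.cov(z, y)[0, 1]
    diff = syy - lam * szz
    return (diff + np.sqrt(diff**2 + 4 * lam * szy**2)) / (2 * szy)

rng = np.random.default_rng(8)
n, a, b = 500, 1.0, 2.0
x = rng.uniform(-3, 3, size=n)            # unknown non-random regressors
eps, sig = 0.4, 0.4
y = a + b * x + eps * rng.normal(size=n)  # noisy response
z = x + sig * rng.normal(size=n)          # noisy regressor observations

b_naive = np.cov(z, y)[0, 1] / np.var(z, ddof=1)     # attenuated OLS slope
b_eiv = deming_slope(z, y, lam=(eps / sig)**2)
print("naive OLS slope:", b_naive, " Deming slope:", b_eiv, " true:", b)
```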
35. Solution for GNSS height anomaly fitting of mining area based on robust TLS.
- Author
-
Tao, Yeqing, Mao, Guangxiong, and Zhou, Xiaozhong
- Subjects
- *GLOBAL Positioning System, *APPROXIMATION algorithms, *GEOLOGY, *NUMERICAL solutions to equations, *LAGRANGE equations
- Abstract
Global navigation satellite system (GNSS) height solutions for a mining area are readily contaminated by outliers because of the special geological environment. Additionally, the GNSS height anomaly fitting model is a type of errors-in-variables model, and the traditional solution for parameter estimation does not account for errors in the coefficient matrix. To solve these two problems, this paper presents a robust total least squares estimation solution for GNSS height anomaly fitting in mining areas. Different from the traditional solution for robust estimation, an algorithm is established employing the median method to obtain stable parameter values when the observation data are highly contaminated. Employing a Lagrange function and a weight function, an iterative algorithm for the parameter estimation of the GNSS height anomaly fitting model is proposed, and the algorithm is verified using real data from a mining area. The numerical results show that the proposed solution obtains stable parameter values when the observation data are highly contaminated by outliers and demonstrate that the proposed algorithm is more accurate than traditional robust estimation solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
36. On the total least-squares estimation for autoregressive model.
- Author
-
Zeng, W., Fang, X., Lin, Y., Huang, X., and Zhou, Y.
- Subjects
- *GLOBAL Positioning System, *LEAST squares, *VECTOR autoregression model, *AUTOREGRESSIVE models, *MATRIX analytic methods
- Abstract
The classical Least-Squares (LS) adjustment has been widely used in processing and analysing observations from Global Navigation Satellite Systems (GNSS). However, in detecting temporal correlations of GNSS observations, which can be described by means of an autoregressive (AR) process, the LS method may not provide reliable estimates of the process coefficients, since the Yule-Walker (YW) equations constitute structured Errors-In-Variables (EIV) equations. In this contribution, we propose a Total Least-Squares (TLS) solution with a singular cofactor matrix to solve the YW equations. The proposed TLS solution is obtained based on the fact that the random errors belong to the column space of their cofactor matrix. In addition, the proposed solution does not need any substitution of the squared true parameter vector as done in current publications. Finally, we simulate the AR process to show that our solution is more reliable than existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
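To illustrate the structured EIV character of the lag equations mentioned above, the sketch below estimates AR(2) coefficients from a simulated series both by ordinary least squares and by a plain (unstructured) SVD-based TLS. The singular-cofactor-matrix solution of the paper is not reproduced, and the AR parameters and sample size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
phi = np.array([0.75, -0.5])                     # assumed AR(2) coefficients (stationary)
N = 5000
x = np.zeros(N)
for t in range(2, N):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + rng.normal()

# Lag (design) matrix and target vector: x_t ~ phi1 * x_{t-1} + phi2 * x_{t-2}.
A = np.column_stack([x[1:-1], x[:-2]])
y = x[2:]

phi_ls = np.linalg.lstsq(A, y, rcond=None)[0]

# Plain TLS via the SVD of the augmented matrix [A | y]; note that a true
# structured-EIV solution would also respect the shared entries between A and y.
V = np.linalg.svd(np.column_stack([A, y]))[2].T
v = V[:, -1]
phi_tls = -v[:2] / v[2]

print("LS :", phi_ls)
print("TLS:", phi_tls)
```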
37. Solving the inequality-constrained weighted total least squares problem by a quasi-Newton correction method.
- Author
-
王乐洋, 李海燕, and 陈晓勇
- Abstract
The errors-in-variables (EIV) model with inequality constraints is transformed into a standard nonlinear optimization program, which can be solved by existing optimization methods such as the active set method or sequential quadratic programming (SQP). Weighted total least squares with inequality constraints (ICWTLS) is limited by the complexity of the Hessian matrix, the second partial derivative of the objective function. In this paper, the Hessian matrix in SQP is replaced by an approximation based on a quasi-Newton method. The algorithm we propose can deal with the ICWTLS problem with a general weight matrix and has the ability to handle large-scale problems. Examples illustrate that the new algorithm is efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
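As a generic illustration of handing an inequality-constrained adjustment to an SQP solver, as discussed in the abstract above, the sketch below minimizes a simple TLS-type objective under non-negativity constraints using SciPy's SLSQP routine. It is not the authors' quasi-Newton ICWTLS algorithm; the toy model, weights (equal), and constraints are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)
x_true = np.array([0.8, 0.0, 1.2])                    # assumed parameters; one sits on the bound
A_true = rng.normal(size=(120, 3))
y = A_true @ x_true + 0.05 * rng.normal(size=120)     # noisy observations
A = A_true + 0.05 * rng.normal(size=A_true.shape)     # noisy coefficient matrix (EIV setting)

def tls_cost(x):
    # Equal-weight TLS-type objective: squared residuals scaled by 1 + ||x||^2.
    r = y - A @ x
    return (r @ r) / (1.0 + x @ x)

# Inequality constraints x_i >= 0, passed to the SQP solver (SLSQP itself maintains
# a BFGS-like quasi-Newton approximation of the Hessian internally).
cons = [{"type": "ineq", "fun": lambda x, i=i: x[i]} for i in range(3)]
res = minimize(tls_cost, x0=np.ones(3), method="SLSQP", constraints=cons)
print("constrained estimate:", res.x)
```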
38. Bayesian Smoothing for Measurement Error Problems
- Author
-
Berry, Scott M., Carroll, Raymond J., Ruppert, David, Van Huffel, Sabine, editor, and Lemmerling, Philippe, editor
- Published
- 2002
- Full Text
- View/download PDF
39. Inferring causal directions from uncertain data.
- Author
-
Zhang, Yulai, Ma, Weifeng, and Luo, Guiming
- Subjects
- *DATA analysis, *STATISTICAL correlation, *ERRORS-in-variables models, *CAUSAL models, *SAMPLING (Process)
- Abstract
Causal knowledge discovery is an essential task in many disciplines. Inferring causal directions from the measurement data of two correlated variables is one of the most basic but non-trivial problems in causal discovery research. Most existing methods assume that at least one of the variables is measured exactly. In practice, uncertain data with observation errors are widespread and unavoidable for both the cause and the effect, and correct causal relationships are blurred by such noise. A causal direction inference method based on the errors-in-variables (EIV) model is proposed in this work. All variables are assumed to be measured with observation errors in errors-in-variables models. Causal directions are inferred by computing the correlation coefficients between the regression model functions and the probability density functions for both possible causal directions. Experiments on artificial data sets and real-world data sets illustrate the performance of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
40. A new confidence interval in errors-in-variables model with known error variance.
- Author
-
Yan, Liang, Wang, Rui, and Xu, Xingzhong
- Subjects
- *CONFIDENCE intervals, *CONFIDENCE regions (Mathematics), *STATISTICAL sampling, *STATISTICAL hypothesis testing, *STATISTICAL models
- Abstract
This paper considers constructing a new confidence interval for the slope parameter in the structural errors-in-variables model with known error variance associated with the regressors. Existing confidence intervals are so severely affected by the Gleser–Hwang effect that they tend to have poor empirical coverage probabilities and unsatisfactory lengths. Moreover, these problems get worse with decreasing reliability ratio, which also results in the more frequent non-existence of some existing intervals. To ease these issues, this paper presents a fiducial generalized confidence interval which maintains the correct asymptotic coverage. Simulation results show that this fiducial interval is slightly conservative while often having average length comparable to or shorter than that of the other methods. Finally, we illustrate these confidence intervals with two real data examples, and in the second example some existing intervals do not exist. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
41. Generalized total least squares prediction algorithm for universal 3D similarity transformation.
- Author
-
Wang, Bin, Li, Jiancheng, Liu, Chao, and Yu, Jie
- Subjects
- *LEAST squares, *SIMILARITY transformations, *GLOBAL Positioning System, *PARAMETER estimation, *COVARIANCE matrices
- Abstract
Three-dimensional (3D) similarity datum transformation is extensively applied to transform coordinates from a GNSS-based datum to a local coordinate system. Recently, some total least squares (TLS) algorithms have been successfully developed to solve the universal 3D similarity transformation problem (possibly with big rotation angles and an arbitrary scale ratio). However, their procedures for parameter estimation and new point (non-common point) transformation were implemented separately, and the statistical correlation that often exists between the common and new points in the original coordinate system was not considered. In this contribution, a generalized total least squares prediction (GTLSP) algorithm, which implements the parameter estimation and new point transformation synthetically, is proposed. All of the random errors in the original and target coordinates, and their variance-covariance information, are considered. The 3D transformation model in this case is abstracted as a kind of generalized errors-in-variables (EIV) model, and the equation for new point transformation is incorporated into the functional model as well. The iterative solution is then derived based on the Gauss-Newton approach to nonlinear least squares. The performance of the GTLSP algorithm is verified by a simulated experiment, and the results show that the GTLSP algorithm can improve the statistical accuracy of the transformed coordinates compared with the existing TLS algorithms for 3D similarity transformation. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
42. Fiducial inference in the classical errors-in-variables model.
- Author
-
Yan, Liang, Wang, Rui, and Xu, Xingzhong
- Subjects
- *ERRORS-in-variables models, *CONFIDENCE intervals, *ASYMPTOTIC controllability, *INTERVAL analysis, *EMPIRICAL research
- Abstract
For the slope parameter of the classical errors-in-variables model, existing interval estimations with finite length will have confidence level equal to zero because of the Gleser-Hwang effect. Especially when the reliability ratio is low and the sample size is small, the Gleser-Hwang effect is so serious that it leads to the very liberal coverages and the unacceptable lengths of the existing confidence intervals. In this paper, we obtain two new fiducial intervals for the slope. One is based on a fiducial generalized pivotal quantity and we prove that this interval has the correct asymptotic coverage. The other fiducial interval is based on the method of the generalized fiducial distribution. We also construct these two fiducial intervals for the other parameters of interest of the classical errors-in-variables model and introduce these intervals to a hybrid model. Then, we compare these two fiducial intervals with the existing intervals in terms of empirical coverage and average length. Simulation results show that the two proposed fiducial intervals have better frequency performance. Finally, we provide a real data example to illustrate our approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
43. Asymptotic normality and mean consistency of LS estimators in the errors-in-variables model with dependent errors
- Author
-
Hongchang Hu, Yuncai Yu, Xinsheng Liu, and Yu Zhang
- Subjects
negatively superadditive dependent, 62e20, strong law of large numbers, General Mathematics, asymptotic normality, mean consistency, 010102 general mathematics, Asymptotic distribution, Estimator, 01 natural sciences, 010104 statistics & probability, Consistency (statistics), Law of large numbers, 62j05, errors-in-variables model, 60f05, QA1-939, Errors-in-variables models, Applied mathematics, 0101 mathematics, 62f12, Geometry and topology, Mathematics
- Abstract
In this article, an errors-in-variables regression model in which the errors are negatively superadditive dependent (NSD) random variables is studied. First, the Marcinkiewicz-type strong law of large numbers for NSD random variables is established. Then, we use the strong law of large numbers to investigate the asymptotic normality of least square (LS) estimators for the unknown parameters. In addition, the mean consistency of LS estimators for the unknown parameters is also obtained. Some results for independent random variables and negatively associated random variables are extended and improved to the case of NSD setting. At last, two simulations are presented to verify the asymptotic normality and mean consistency of LS estimators in the model.
- Published
- 2020
44. Two models for linear comparative calibration
- Author
-
Wimmer G. and Witkovský V.
- Subjects
calibration problem, multiple-use calibration, maximum likelihood estimator, errors-in-variables model, Kenward-Roger approximation, Technology
- Abstract
We consider the comparative calibration problem in the case when a linear relationship is assumed between two considered measuring devices with possibly different units and precisions. The first method for obtaining the approximate confidence region for the unknown parameters of the calibration line applies the maximum likelihood estimators of the unknown parameters. The second method is based on estimation of the calibration line via a replicated errors-in-variables model. An essential point in this approach is the approximation of the small-sample distribution of the Wald-type test statistic, which enables construction of the interval estimators for the multiple-use calibration case.
- Published
- 2012
- Full Text
- View/download PDF
45. A New Technique to Solve the Unidentifiability Problem of the Linear Structural Relationship Model.
- Author
-
Al Mamun, Abu Sayed Md., Zubairi, Yong Zulina, Hussin, Abdul Ghapor, and Imon, A. H. M. Rahmatullah
- Subjects
- *
LINEAR statistical models , *MAXIMUM likelihood statistics , *NONPARAMETRIC estimation , *DIRECTION field (Mathematics) , *STATISTICAL research - Abstract
Several techniques have been used to solve the unidentifiability problem of the linear structural relationship model. Most of them assume that either the error variance σ²δ or σ²ε is known, that both are known, or that their ratio is known, and then estimate the remaining parameters. In this study, we instead assume that the slope parameter β is known and derive the maximum likelihood estimates (MLE) of the remaining parameters. In practice, the slope is estimated separately by a nonparametric method and treated as known when the other parameters are estimated by maximum likelihood. We obtain closed-form estimates of the parameters together with their variances and covariances. A simulation study shows that the estimated parameters are unbiased and consistent. Finally, the method is illustrated with a real data set. [ABSTRACT FROM AUTHOR] A plug-in sketch of the known-slope step appears after this record.
- Published
- 2015
- Full Text
- View/download PDF
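The known-slope step of entry 45 reduces to simple plug-in formulas for the remaining parameters of the structural relationship model. The sketch below uses moment-type estimates, which coincide with the ML estimates up to the n versus n-1 divisor; the simulated data and the shortcut of passing the true slope (rather than a separately obtained nonparametric estimate) are purely illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(3)

def remaining_estimates(x, y, beta):
    """Moment-type estimates (ML up to the n vs n-1 divisor) of the remaining
    parameters of the linear structural relationship model once beta is fixed."""
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return {
        "alpha": y.mean() - beta * x.mean(),        # intercept
        "mu": x.mean(),                             # mean of the latent variable
        "sigma2_X": sxy / beta,                     # variance of the latent variable
        "sigma2_delta": sxx - sxy / beta,           # error variance in x
        "sigma2_eps": syy - beta * sxy,             # error variance in y
    }

# Data simulated from the structural relationship model.
n, alpha, beta = 300, 1.0, 2.0
X = rng.normal(5.0, 1.5, n)                         # latent values
x = X + rng.normal(0.0, 0.5, n)                     # observed with error
y = alpha + beta * X + rng.normal(0.0, 0.7, n)

# In the paper the slope comes from a separate nonparametric estimate;
# here we plug in the true value only to illustrate the closed-form step.
print(remaining_estimates(x, y, beta))
```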
46. Exact finite-sample bias and MSE reduction in a simple linear regression model with measurement error
- Author
-
Tsukuma, Hisayuki
- Published
- 2019
- Full Text
- View/download PDF
47. Unitarily invariant errors-in-variables estimation.
- Author
-
Pešta, Michal
- Subjects
ERRORS-in-variables models , ESTIMATION theory , MEASUREMENT errors , LEAST squares , MATHEMATICAL invariants - Abstract
Linear relations containing measurement errors in both the input and output data are considered. The parameters of these errors-in-variables (EIV) models can be estimated by minimizing the total least squares (TLS) of the input-output disturbances, i.e., by penalizing the orthogonal squared misfit, which corresponds to minimizing the Frobenius norm of the error matrix. An extension of the traditional TLS estimator in the EIV model, called the EIV estimator, is proposed, in which a general unitarily invariant norm of the error matrix is minimized. Such an estimator is highly non-linear. Regardless of the chosen unitarily invariant matrix norm, the corresponding EIV estimator is shown to coincide with the TLS estimator. Its existence and uniqueness are discussed. Moreover, the EIV estimator is proved to be scale invariant as well as interchange, direction, and rotation equivariant. [ABSTRACT FROM AUTHOR] A standard SVD-based TLS sketch follows this record.
- Published
- 2016
- Full Text
- View/download PDF
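Since entry 47 shows that every unitarily invariant norm leads back to the classical TLS estimator, a useful reference point is the standard SVD-based (Frobenius-norm) TLS solution. The sketch below implements that textbook construction; the simulated data and noise levels are illustrative assumptions, and the generic existence condition is checked only crudely.
```python
import numpy as np

def tls(A, b):
    """Classical total least squares solution of A x ~ b via the SVD of the
    augmented matrix [A | b]."""
    m = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                      # right singular vector of the smallest singular value
    if np.isclose(v[-1], 0.0):
        raise ValueError("TLS solution does not exist (generic condition violated)")
    return -v[:m] / v[m]

rng = np.random.default_rng(4)
n, m = 100, 2
x_true = np.array([1.5, -0.7])
A_true = rng.normal(size=(n, m))
A = A_true + 0.05 * rng.normal(size=(n, m))       # noisy design matrix
b = A_true @ x_true + 0.05 * rng.normal(size=n)   # noisy observations

print("TLS estimate:", tls(A, b))
print("OLS estimate:", np.linalg.lstsq(A, b, rcond=None)[0])
```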
48. A mixed weighted least squares and weighted total least squares adjustment method and its geodetic applications.
- Author
-
Zhou, Y. and Fang, X.
- Subjects
- *
LEAST squares , *MATHEMATICAL statistics , *DEPENDENCE (Statistics) , *ERRORS , *CURVE fitting - Abstract
A mixed weighted least squares (WLS) and weighted total least squares (WTLS) (mixed WLS-WTLS) method is presented for an errors-in-variables (EIV) model with some fixed columns in the design matrix. A numerical computational scheme and an approximate accuracy assessment method are also provided. The method extends the mixed least squares (LS)-total least squares (TLS) method to the case in which the random columns are corrupted by heteroscedastic, correlated noise. The mixed WLS-WTLS method improves computational efficiency compared with existing WTLS methods without loss of accuracy, particularly when the fixed columns far outnumber the random ones. Bursa transformation and parallel-line fitting examples are carried out to demonstrate the performance of the proposed algorithm. Since the mixed WLS-WTLS problem includes both the WLS and the WTLS problems as special cases, it has a wide range of applications. [ABSTRACT FROM AUTHOR] A sketch of the unweighted mixed LS-TLS special case follows this record.
- Published
- 2016
- Full Text
- View/download PDF
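For entry 48, the unweighted special case (iid errors, unit weights) already shows the structure the paper generalizes: error-free columns are eliminated by a QR step and TLS is applied only to the reduced noisy block. The sketch below follows that classical mixed LS-TLS recipe; it is not the paper's weighted algorithm, and the simulated data are illustrative assumptions.
```python
import numpy as np

def tls(A, b):
    """Plain total least squares via the SVD of [A | b]."""
    m = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]
    return -v[:m] / v[m]

def mixed_ls_tls(A_fixed, A_rand, b):
    """Unweighted mixed LS-TLS: columns of A_fixed are error-free, columns of
    A_rand carry iid errors (special case of the paper's mixed WLS-WTLS)."""
    m1 = A_fixed.shape[1]
    C = np.column_stack([A_fixed, A_rand, b])
    Q, R = np.linalg.qr(C)                     # R is square upper-triangular
    R11, R12 = R[:m1, :m1], R[:m1, m1:-1]
    R22, R2b = R[m1:, m1:-1], R[m1:, -1]
    x2 = tls(R22, R2b)                         # TLS on the reduced noisy block
    x1 = np.linalg.solve(R11, R[:m1, -1] - R12 @ x2)
    return np.concatenate([x1, x2])

rng = np.random.default_rng(5)
n = 200
A_fixed = np.column_stack([np.ones(n), np.linspace(0, 1, n)])   # error-free columns
A2_true = rng.normal(size=(n, 2))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = np.column_stack([A_fixed, A2_true]) @ x_true + 0.05 * rng.normal(size=n)
A_rand = A2_true + 0.05 * rng.normal(size=(n, 2))               # observed noisy columns

print(mixed_ls_tls(A_fixed, A_rand, b))
```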
49. On the errors-in-variables model with equality and inequality constraints for selected numerical examples.
- Author
-
Fang, Xing and Wu, Yun
- Subjects
- *
ERRORS-in-variables models , *GEODESY , *CONSTRAINT algorithms , *MATHEMATICAL models , *NUMERICAL analysis - Abstract
It is well known that the errors-in-variables (EIV) model has been treated as a special case of the traditional geodetic model, the nonlinear Gauss-Helmert model (GHM), for more than a century. In this contribution, an adjustment of the EIV model with equality and inequality constraints is investigated based on the nonlinear GHM. In each iteration, the constrained EIV model is linearized to form a quadratic program. Furthermore, a precision description is derived for the mixed constrained problem. The results of the numerical examples show that this approach avoids the large computational expense of the existing combinatorial solution, which normally grows with the number of inequality constraints. [ABSTRACT FROM AUTHOR] A direct nonlinear-solver sketch of a constrained EIV fit follows this record.
- Published
- 2016
- Full Text
- View/download PDF
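Entry 49 handles constraints inside an iterated Gauss-Helmert/quadratic-programming scheme. As a much simpler point of comparison, the hedged sketch below fits a straight line with errors in both coordinates by handing the full nonlinear EIV objective, plus one inequality constraint, directly to a general-purpose solver (scipy's SLSQP); the constraint, data, and noise levels are illustrative assumptions, not the paper's examples.
```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# Straight-line EIV fit y ~ a + b*X with errors in both coordinates,
# subject to the inequality constraint b >= 1 (chosen only for illustration).
n = 30
X_true = np.linspace(0, 5, n)
x = X_true + rng.normal(0, 0.2, n)
y = 0.5 + 1.2 * X_true + rng.normal(0, 0.2, n)

def objective(p):
    # p = [a, b, X_1, ..., X_n]; minimize squared corrections in x and y.
    a, b, X = p[0], p[1], p[2:]
    return np.sum((x - X) ** 2) + np.sum((y - a - b * X) ** 2)

cons = [{"type": "ineq", "fun": lambda p: p[1] - 1.0}]   # enforce b >= 1
p0 = np.concatenate([[0.0, 1.0], x])                     # start from observed x
res = minimize(objective, p0, method="SLSQP", constraints=cons)

a_hat, b_hat = res.x[0], res.x[1]
print(f"constrained EIV fit: a = {a_hat:.3f}, b = {b_hat:.3f}")
```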
50. The effect of errors-in-variables on variance component estimation.
- Author
-
Xu, Peiliang
- Subjects
- *
ERRORS-in-variables models , *LEAST squares , *VARIANCES , *NONLINEAR estimation , *PARAMETER estimation - Abstract
Although total least squares (TLS) has been widely applied, variance components in an errors-in-variables (EIV) model can be inestimable under certain conditions and unstable in the sense that small random errors can result in very large errors in the estimated variance components. We investigate the effect of the random design matrix on variance component (VC) estimation of MINQUE type by treating the design matrix as if it were error-free, derive the first-order bias of the VC estimate, and construct bias-corrected VC estimators. As a special case, we obtain a bias-corrected estimate for the variance of unit weight. Although TLS methods are statistically rigorous, they can be computationally too expensive. We directly Taylor-expand the nonlinear weighted LS estimate of the parameters up to the second-order approximation in terms of the random errors of the design matrix, derive the bias of the estimate, and use it to construct a bias-corrected weighted LS estimate. Bearing in mind that the random errors of the design matrix create a bias in the normal matrix of the weighted LS estimate, we propose to calibrate the normal matrix by computing the bias and removing it from the normal matrix. As a result, we obtain a new parameter estimate, called the N-calibrated weighted LS estimate. The simulations have shown that (i) errors-in-variables have a significant effect on VC estimation if they are large but treated as non-random; the variance components can then be incorrectly estimated by more than one order of magnitude, depending on the nature of the problem and the size of the EIV errors; (ii) the bias-corrected VC estimate can effectively remove the bias of the VC estimate; if the signal-to-noise ratio is small, higher-order terms may be necessary; nevertheless, since the bias-corrected VC estimate is constructed by directly removing the estimated bias from the estimate itself, the simulation results clearly indicate a great risk of obtaining negative values for variance components, and VC estimation in EIV models remains difficult and challenging; and (iii) both the bias-corrected weighted LS estimate and the N-calibrated weighted LS estimate clearly outperform the weighted LS estimate. The intuitive N-calibrated weighted LS estimate is computationally less expensive and is shown to perform even better statistically than the bias-corrected weighted LS estimate in producing an almost unbiased estimate of the parameters. [ABSTRACT FROM AUTHOR] A minimal sketch of the normal-matrix calibration idea follows this record.
- Published
- 2016
- Full Text
- View/download PDF
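The normal-matrix calibration idea of entry 50 is easiest to see in the simplest possible setting: unit weights and iid design-matrix errors with known variance, where the bias of A^T A equals n*sigma_A^2*I and can be subtracted directly. The sketch below implements only that special case (the paper's estimator handles general weights and error covariances); all simulated values are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(7)

def n_calibrated_ls(A, y, sigma_A2):
    """Least squares with the normal matrix 'calibrated' by removing the bias
    n*sigma_A2*I that iid design-matrix errors introduce into A^T A
    (simplest homoscedastic, unit-weight case)."""
    n, m = A.shape
    N = A.T @ A - n * sigma_A2 * np.eye(m)      # bias-corrected normal matrix
    return np.linalg.solve(N, A.T @ y)

n, m = 500, 3
beta = np.array([1.0, -2.0, 0.5])
A_true = rng.normal(size=(n, m))
sigma_A2 = 0.3 ** 2
A = A_true + rng.normal(0, np.sqrt(sigma_A2), (n, m))   # noisy design matrix
y = A_true @ beta + rng.normal(0, 0.1, n)

print("plain LS     :", np.linalg.lstsq(A, y, rcond=None)[0])
print("N-calibrated :", n_calibrated_ls(A, y, sigma_A2))
```
Plain LS is visibly attenuated toward zero, while the calibrated estimate stays close to the true parameters.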