186 results on '"Matrix form"'
Search Results
2. Approximating of conic sections by DP curves with endpoint interpolation.
- Author
-
Bakhshesh, Davood and Davoodi, Mansoor
- Subjects
APPROXIMATION theory ,POLYNOMIALS ,INDUSTRIAL design ,NUMERICAL analysis ,LEAST squares ,CONIC sections - Abstract
Conic sections have many applications in industrial design; however, they cannot be exactly represented in polynomial form. Hence, approximating conic sections with polynomials is a challenging problem. In this paper, we use the monomial form of Delgado and Peña (DP) curves and present a matrix representation for them. Using the matrix form and the least squares method, we propose a simple and efficient algorithm for approximating conic sections by DP curves of arbitrary degree with endpoint interpolation. Finally, we test and compare the proposed algorithm on several numerical examples, which validate and confirm its efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
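The least-squares-with-endpoint-interpolation idea described in the abstract above can be shown with a minimal, hedged sketch: instead of the authors' DP basis, an ordinary monomial correction basis is used to fit a quarter circle while both endpoints are matched exactly. The function names, degree, and sample count are assumptions for illustration only.

import numpy as np

def fit_arc_with_endpoint_interpolation(degree=4, n_samples=200):
    # Least-squares polynomial approximation of a quarter circle with
    # exact interpolation of both endpoints (illustrative only).
    t = np.linspace(0.0, 1.0, n_samples)
    target = np.column_stack([np.cos(0.5 * np.pi * t),   # x(t)
                              np.sin(0.5 * np.pi * t)])  # y(t)
    p0, p1 = target[0], target[-1]
    # Write the curve as (1-t)*p0 + t*p1 + t*(1-t)*q(t); the correction term
    # vanishes at t = 0 and t = 1, so the endpoints are always interpolated.
    base = np.outer(1.0 - t, p0) + np.outer(t, p1)
    basis = np.column_stack([(t * (1.0 - t)) * t**k for k in range(degree - 1)])
    coeffs, *_ = np.linalg.lstsq(basis, target - base, rcond=None)
    approx = base + basis @ coeffs
    max_err = np.abs(np.linalg.norm(approx, axis=1) - 1.0).max()
    return coeffs, max_err

if __name__ == "__main__":
    _, err = fit_arc_with_endpoint_interpolation()
    print(f"max radial deviation from the unit circle: {err:.2e}")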
3. Robust estimation method for panel interval-valued data model with fixed effects.
- Author
-
Zhang, Jinjin, Li, Qingqing, Wei, Bowen, and Ji, Aibing
- Subjects
PANEL analysis ,MONTE Carlo method ,LEAST squares ,DATA modeling ,PREDICTION models ,MEASUREMENT errors ,FIXED effects model - Abstract
The panel data model with fixed effects is widely used in economic and administrative applications. However, measurement errors, data variability, and outliers may decrease the accuracy of the model's predictions. In this paper, we use panel interval-valued data to represent the measurement errors and data volatility of observations. Further, we propose a corresponding panel interval-valued data model with fixed effects, in which both the response and explanatory variables are interval-valued data. To reduce the impact of outliers on our model, we propose a robust estimation method based on the iterative weighted least squares technique. Finally, a Monte Carlo simulation and an empirical application demonstrate that our model is a suitable tool for analyzing the behaviour of panel interval-valued data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
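The robust estimation in the abstract above rests on iteratively reweighted least squares. The following sketch shows only the generic IRLS idea with Huber weights on an ordinary regression, not the paper's interval-valued panel estimator; the tuning constant and the synthetic data are assumptions.

import numpy as np

def huber_irls(X, y, delta=1.345, n_iter=50, tol=1e-8):
    # Generic iteratively reweighted least squares with Huber weights;
    # observations with large residuals (outliers) are downweighted.
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12    # robust (MAD) scale
        u = np.abs(r) / scale
        w = np.where(u <= delta, 1.0, delta / u)         # Huber weights
        beta_new = np.linalg.lstsq(np.sqrt(w)[:, None] * X,
                                   np.sqrt(w) * y, rcond=None)[0]
        if np.linalg.norm(beta_new - beta) < tol:
            break
        beta = beta_new
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.2, size=100)
y[:5] += 10.0                                            # inject outliers
print(huber_irls(X, y))                                  # close to [1, 2]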
4. Penalized Structural Equation Models.
- Author
-
Asparouhov, Tihomir and Muthén, Bengt
- Subjects
STRUCTURAL equation modeling ,GROWTH curves (Statistics) ,LEAST squares - Abstract
Penalized structural equation models (PSEM) is a new, powerful estimation technique that can be used to tackle a variety of difficult structural estimation problems that cannot be handled with previously developed methods. In this paper we describe the PSEM framework and illustrate the quality of the method with simulation studies. Maximum-likelihood and weighted least squares PSEM estimation is discussed for SEM models with continuous and categorical variables. We show that traditional EFA, multiple group alignment (MGA), and Bayesian SEM (BSEM) are examples of PSEM. The PSEM framework also extends standard SEM models with the possibility to structurally align various model parameters. Exploratory latent growth models, also referred to as Tuckerized curve models, can also be estimated in the PSEM framework and are illustrated here with simulation studies and an empirical example. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. A note on the reverse order law for least square g-inverse of operator product.
- Author
-
Xiong, Zhiping and Qin, Yingying
- Subjects
LEAST squares ,INVERSE problems ,OPERATOR theory ,MATHEMATICAL bounds ,LINEAR operators ,INFINITY (Mathematics) - Abstract
In this paper, we study the reverse order law for the least square g-inverse of an operator product using the technique of the matrix form of bounded linear operators. In particular, some necessary and sufficient conditions for the corresponding inclusions are presented. Moreover, some finite-dimensional results are extended to infinite-dimensional settings. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
6. Moving target positioning algorithm based on multidimensional scaling analysis from TDOA and FDOA with sensor uncertainties.
- Author
-
Ahmed, Hesham Ibrahim
- Subjects
MULTIDIMENSIONAL scaling ,POSITION sensors ,SENSOR placement ,MATRIX multiplications ,LEAST squares - Abstract
The problem of moving target localization from range and velocity difference measurements has attracted considerable attention in recent years. In this article, a novel weighted multidimensional scaling (MDS) algorithm is proposed to estimate the position and velocity of a moving target by utilizing time difference of arrival (TDOA) and frequency difference of arrival (FDOA) measurements with sensor position and velocity errors. The proposed estimator is based on the optimization of a cost function related to the scalar product matrix in classical MDS. The estimator is accurate and closed-form. The algorithm has a smaller mean square error than the two-step weighted least squares (LS) algorithm at moderate and high noise power levels. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. An efficient iterative reweighted least-squares algorithm for two-dimensional FIR filters design in the Lp sense.
- Author
-
Jou, Yue-Dar, Hsieh, Chaur-Heh, and Kuo, Chung-Ming
- Subjects
ALGORITHMS ,LEAST squares ,FILTERS (Mathematics) - Abstract
This paper presents an efficient iterative reweighted least-squares (IRLS) algorithm to obtain an Lp approximation for the design of two-dimensional FIR filters. This algorithm introduces an extra frequency response which implicitly includes the weighting function such that the p-power error to be minimized can be represented in a two-dimensional matrix form. The proposed algorithm reduces the computational complexity from O(N^6) to O(N^3), and storage space from O(N^4) to O(N^2), compared with the conventional IRLS algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
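As a hedged companion to the abstract above, the sketch below shows the generic IRLS recipe for Lp minimization on a small one-dimensional problem: each iteration solves a weighted L2 problem with weights |r|^(p-2). It is not the paper's two-dimensional FIR matrix formulation, and the toy design matrix and p value are assumptions.

import numpy as np

def lp_irls(A, b, p=4, n_iter=30, eps=1e-8):
    # Minimize ||A x - b||_p by iteratively reweighted least squares.
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        r = A @ x - b
        w = np.maximum(np.abs(r), eps) ** (p - 2)        # IRLS weights
        x = np.linalg.lstsq(np.sqrt(w)[:, None] * A,
                            np.sqrt(w) * b, rcond=None)[0]
    return x

# Toy use: Lp straight-line fit to noisy samples.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
A = np.column_stack([np.ones_like(t), t])
b = 0.5 + 2.0 * t + 0.05 * rng.normal(size=t.size)
print(lp_irls(A, b, p=4))                                # close to [0.5, 2.0]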
8. A robust scalar-on-function logistic regression for classification.
- Author
-
Mutis, Muge, Beyaztas, Ufuk, Simsek, Gulhayat Golbasi, and Shang, Han Lin
- Subjects
LEAST squares ,LOGISTIC regression analysis ,STRAWBERRIES - Abstract
Scalar-on-function logistic regression, where the response is a binary outcome and the predictor consists of random curves, has become a general framework to explore a linear relationship between the binary outcome and a functional predictor. Most of the methods used to estimate this model are based on least-squares type estimators. However, the least-squares estimator is seriously hindered by outliers, leading to biased parameter estimates and an increased probability of misclassification. This paper proposes a robust partial least squares method to estimate the regression coefficient function in the scalar-on-function logistic regression. The regression coefficient function represented by the functional partial least squares decomposition is estimated by a weighted likelihood method, which downweighs the effect of outliers in the response and predictor. The estimation and classification performance of the proposed method is evaluated via a series of Monte Carlo experiments and a strawberry puree data set. The results obtained from the proposed method compare favorably with existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. Camera calibration method based on circular array calibration board.
- Author
-
Haifeng Chen, Jinlei Zhuang, Bingyou Liu, Lichao Wang, and Luxian Zhang
- Subjects
CALIBRATION ,CAMERA calibration ,LEAST squares ,IMAGE processing - Abstract
Camera calibration directly affects the accuracy and stability of the whole measurement system. According to the characteristics of the circular array calibration plate, a camera calibration method based on a circular array calibration plate is proposed in this paper. First, a subpixel edge detection algorithm is used for image preprocessing. Then, according to cross-ratio invariance and geometric constraints, the projected position of the center point is obtained. Finally, calibration experiments were carried out. Experimental results show that, under any illumination conditions, the average reprojection error of the center coordinates obtained by the improved calibration algorithm is less than 0.12 pixels, which is better than the traditional camera calibration algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
10. A General Modeling Framework for Network Autoregressive Processes.
- Author
-
Yin, Hang, Safikhani, Abolfazl, and Michailidis, George
- Subjects
ASYMPTOTIC distribution ,LEAST squares ,AIR pollution ,NETWORK performance - Abstract
A general, flexible framework for Network Autoregressive Processes (NAR) is developed, wherein the response of each node in the network depends linearly on its past values, a prespecified linear combination of neighboring nodes, and a set of node-specific covariates. The corresponding coefficients are node-specific, and the framework can accommodate heavier-than-Gaussian errors with spatial-autoregressive, factor-based, or, in certain settings, general covariance structures. We provide a sufficient condition that ensures the stability (stationarity) of the underlying NAR and is significantly weaker than its counterparts in previous work in the literature. Further, we develop ordinary and (estimated) generalized least squares estimators for both fixed and diverging numbers of network nodes, and also provide their ridge-regularized counterparts that exhibit better performance in large network settings. We derive the asymptotic distributions of these estimators, which can be used for testing various hypotheses of interest to practitioners. We also address the issue of misspecifying the network connectivity and its impact on the aforementioned asymptotic distributions of the various NAR parameter estimators. The framework is illustrated on both synthetic and real air pollution data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. Iteratively reweighted least square for kernel expectile regression with random features.
- Author
-
Cui, Yue and Zheng, Songfeng
- Subjects
LEAST squares ,QUADRATIC programming ,KERNEL functions ,TRAINING needs - Abstract
To overcome the computational burden of quadratic programming in kernel expectile regression (KER), the iteratively reweighted least squares (IRLS) technique was introduced in the literature, resulting in IRLS-KER. However, for nonlinear models, IRLS-KER involves operations with matrices and vectors of the same size as the training set. Thus, as the training set becomes large, nonlinear IRLS-KER needs a long training time and large memory. To further alleviate the training cost, this paper projects the original data into a low-dimensional space via random Fourier features. The inner product of the random Fourier features of two data points is approximately the same as the kernel function evaluated at these two data points. Hence, it is possible to use a linear model in the new low-dimensional space to approximate the original nonlinear model, and consequently, time- and memory-efficient linear training algorithms can be applied. This paper applies the idea of random Fourier features to IRLS-KER, and our testing results on simulated and real-world datasets show that the introduction of random Fourier features makes IRLS-KER achieve prediction accuracy similar to the original nonlinear version with substantially higher time efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
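The key device in the abstract above is the random Fourier feature map, whose inner products approximate an RBF kernel so that linear least squares can replace kernel computations. The sketch below only demonstrates that approximation (the Rahimi-Recht construction); it is not the IRLS-KER code, and the kernel width, feature count, and seeds are assumptions.

import numpy as np

def random_fourier_features(X, n_features=2000, gamma=1.0, seed=0):
    # Map X so that z(x) . z(y) approximates exp(-gamma * ||x - y||^2).
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 3))
Z = random_fourier_features(X, n_features=5000, gamma=0.5)
approx = Z @ Z.T                                          # linear inner products
exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.abs(approx - exact).max())                       # small for many features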
12. Evaluation of Goodness-of-Fit Tests in Random Intercept Cross-Lagged Panel Model: Implications for Small Samples.
- Author
-
Zheng, Bang Quan and Valente, Matthew J.
- Subjects
MAXIMUM likelihood statistics ,MONTE Carlo method ,GOODNESS-of-fit tests ,LEAST squares ,PANEL analysis - Abstract
The random intercept cross-lagged panel model (RI-CLPM) is an extension of the traditional cross-lagged panel model (CLPM) that aims to study between- and within-person variances in longitudinal data. Despite its growing popularity in the behavioral and social sciences, our understanding of goodness-of-fit tests of RI-CLPMs is limited. Using Monte Carlo simulations across different sample sizes and levels of model complexity, this study evaluates goodness-of-fit tests applied to RI-CLPMs by comparing the test statistics of maximum likelihood (ML), generalized least squares (GLS), and reweighted least squares (RLS), as well as their corresponding NFI, CFI, and RMSEA. Our results showed that when N was sufficiently large, ML, GLS, and RLS tended to perform similarly. When N was small relative to the model complexity, RLS outperformed ML and GLS and produced consistent χ²(df) test statistics and fit indices. These results have implications for fitting RI-CLPMs with finite data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. Inference for partially linear additive higher-order spatial autoregressive model with spatial autoregressive error and unknown heteroskedasticity.
- Author
-
Zhang, Yuanqing, Li, Hong, and Feng, Yaqin
- Subjects
AUTOREGRESSIVE models ,GENERALIZED method of moments ,LEAST squares ,INFERENTIAL statistics ,ECONOMETRIC models ,AUTOREGRESSION (Statistics) - Abstract
This article extends the spatial autoregressive model with spatial autoregressive disturbances (SARAR(1,1)), which is the most popular spatial econometric model, to the case of an arbitrary finite number of nonparametric additive terms and spatial autoregressive disturbances of arbitrary finite order (SARAR(R,S)). We propose a sieve two-stage least squares (S2SLS) regression and a generalized method of moments (GMM) procedure for the high-order spatial autoregressive parameters of the disturbance process. Under some sufficient conditions, we show that the proposed estimator for the finite-dimensional parameter is √n-consistent and asymptotically normally distributed. We show that each proposed estimator for the additive terms is consistent and also asymptotically normally distributed, at a rate slower than √n. Consistent estimators for the asymptotic variances of the proposed estimators are provided. In addition, using the asymptotic properties to make statistical inference for the parametric and additive components is also considered. Monte Carlo evidence suggests that the estimation procedure performs reasonably well in small samples and that the proposed approach has some practical value. The proposed method is applied to analyzing factors which affect haze pollution. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. Determination of Helmert transformation parameters for continuous GNSS networks: a case study of the Géoazur GNSS network.
- Author
-
Tran, Dinh Trong, Nocquet, Jean-Mathieu, Luong, Ngoc Dung, and Nguyen, Dinh Huy
- Subjects
GLOBAL Positioning System ,STANDARD deviations ,LEAST squares ,OUTLIERS (Statistics) ,COORDINATE transformations - Abstract
In this paper, we propose an approach to determine the seven parameters of the Helmert transformation by transforming the coordinates of a continuous GNSS network from the World Geodetic System 1984 (WGS84) to the International Terrestrial Reference Frame. This includes (1) converting the coordinates of common points from the global coordinate system to the local coordinate system, (2) identifying and eliminating outliers by the Dikin estimator, and (3) estimating the seven parameters of the Helmert transformation by least squares (LS) estimation with the "clean" data (i.e. outliers removed). Herein, the local coordinate system provides a platform to separate points' horizontal and vertical components. Then, the Dikin estimator identifies and eliminates outliers in the horizontal or vertical component separately. This is significant because common points in a continuous GNSS network may contain outliers. The proposed approach is tested on the Géoazur GNSS network, with the results showing that the Dikin estimator detects outliers at 6 out of 18 common points, among which three points have outliers in the vertical component only. Thus, instead of eliminating all coordinate components of these six common points, we eliminate all coordinate components of only three common points and only the vertical component of the other three. Finally, the classical LS estimation is applied to the "clean" data to estimate the seven parameters of the Helmert transformation with a significant accuracy improvement. The Dikin estimator's results are compared to those of the Huber and Theil-Sen robust estimators, which shows that the Dikin estimator performs better. Furthermore, the weighted total least-squares estimation is implemented to assess the accuracy of the LS estimation with the same data. The inter-comparison of the seven estimated parameters and their standard deviations shows a small difference, at the level of a few parts per million (1E-6). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
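To make the seven-parameter estimation step in the abstract above concrete, here is a hedged sketch of a linearized Bursa-Wolf (Helmert) adjustment by plain least squares on synthetic common points. It omits the paper's local-frame conversion and Dikin-based outlier screening, and the sign convention, point coordinates, and noise level are assumptions.

import numpy as np

def estimate_helmert(src, dst):
    # Linearized 7-parameter (Bursa-Wolf) transform by least squares.
    # Unknowns: [tx, ty, tz, scale, rx, ry, rz], small rotation angles in rad.
    n = src.shape[0]
    A = np.zeros((3 * n, 7))
    x, y, z = src[:, 0], src[:, 1], src[:, 2]
    A[0::3, 0] = 1; A[0::3, 3] = x; A[0::3, 5] = -z; A[0::3, 6] = y
    A[1::3, 1] = 1; A[1::3, 3] = y; A[1::3, 4] = z;  A[1::3, 6] = -x
    A[2::3, 2] = 1; A[2::3, 3] = z; A[2::3, 4] = -y; A[2::3, 5] = x
    params, *_ = np.linalg.lstsq(A, (dst - src).ravel(), rcond=None)
    return params

def apply_helmert(src, p):
    tx, ty, tz, s, rx, ry, rz = p
    R = np.array([[0, rz, -ry], [-rz, 0, rx], [ry, -rx, 0]])
    return src + np.array([tx, ty, tz]) + s * src + src @ R.T

rng = np.random.default_rng(3)
src = rng.uniform(-1e6, 1e6, size=(18, 3))                # 18 common points
true = np.array([1.5, -2.0, 0.8, 3e-6, 2e-6, -1e-6, 4e-6])
dst = apply_helmert(src, true) + rng.normal(scale=0.002, size=src.shape)
print(estimate_helmert(src, dst))                         # close to `true`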
15. GLS estimation and confidence sets for the date of a single break in models with trends.
- Author
-
Beutner, Eric, Lin, Yicong, and Smeekes, Stephan
- Subjects
LEAST squares ,MATRIX inversion ,CONFIDENCE ,CONFIDENCE intervals ,LIKELIHOOD ratio tests - Abstract
We develop a Feasible Generalized Least Squares estimator of the date of a structural break in level and/or trend. The estimator is based on a consistent estimate of a T-dimensional inverse autocovariance matrix. A cubic polynomial transformation of break date estimates can be approximated asymptotically by a nonstandard yet nuisance-parameter-free distribution. The new limiting distribution captures the asymmetry and bimodality in finite samples and is applicable for inference with a single, known set of critical values. We consider confidence intervals/sets for break dates based both on Wald-type tests and on inverting multiple likelihood ratio (LR) tests. A simulation study shows that the proposed estimator increases the empirical concentration probability in a small neighborhood of the true break date and potentially reduces the mean squared errors. The LR-based confidence intervals/sets have good coverage while maintaining informative length even with highly persistent errors and small break sizes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Residual Structural Equation Models.
- Author
-
Asparouhov, Tihomir and Muthén, Bengt
- Subjects
STRUCTURAL equation modeling ,LEAST squares ,LATENT variables ,STRUCTURAL models ,INTEGRATED software - Abstract
The residual variables in a structural equation model can be used to create a secondary structural model, which we call the residual structural equation model (RSEM). We describe maximum-likelihood, weighted least squares, and Bayesian estimation for RSEM. The methodology is illustrated with several examples and simulation studies. We discuss the implementation of RSEM in the Mplus software package and provide scripts for the simulation studies. The RSEM framework is utilized to estimate and simplify popular models such as the random intercept cross-lagged panel model (RI-CLPM) and the latent curve model with structured residuals (LCM-SR). We discuss in detail RSEM models with categorical observed variables as well as categorical latent variables in the context of mixture modeling. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
17. Non-linear block least-squares adjustment for a large number of observations.
- Author
-
Mahboub, Vahid and Ebrahimzadeh, Somayeh
- Subjects
NONLINEAR equations ,NUMBER systems ,LEAST squares - Abstract
In this contribution, two algorithms are developed to solve non-linear systems of equations that can contain a large number of measurements. These algorithms are based on non-linear block least-squares (BLS). Although block least squares has been investigated by some researchers, the non-linear case had not been examined until now. The first algorithm is proposed to solve a special case of non-linear problems that do not require linearization. Such an algorithm can be called total block least-squares. The second algorithm is based on linearization within a general non-linear mixed model using a new notation which is in agreement with the rigorous linearization presented by Pope. Both of these algorithms can handle constraints on the parameters. By use of these algorithms, big-data processing is feasible with inexpensive computers. Furthermore, expensive processors can solve systems with a large number of equations faster. Two case studies with more than 120,000 equations show that fast and accurate computations are possible by applying these algorithms without any loss of accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
18. Application of Mean-covariance Regression Methods for Estimation of EDP|IM Distributions for Small Record Sets.
- Author
-
Ghods, Babak and Rofooei, Fayaz R.
- Subjects
PHYSICAL distribution of goods ,QUANTILE regression ,LEAST squares ,EARTHQUAKE engineering - Abstract
The performance of several regression methods is investigated to estimate the distribution of engineering demand parameters conditioned on intensity measures (EDP|IM) for small record sets. In particular, the performance of multivariate ordinary least squares (OLS), a simultaneous mean-variance regression (MVR) performed with a penalized weighted least-squares loss function, and a mean-covariance/variance regression based on the expectation-maximization (EM) method are assessed. The efficiency of the introduced methods is compared with the FEMA-P58 methodology. Performance assessment of the EM and MVR methods shows that the overall increase in efficiency is about 25–45% for maximum inter-story drift ratios and 30–50% for maximum absolute floor acceleration. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
19. Estimation of Low Rank High-Dimensional Multivariate Linear Models for Multi-Response Data.
- Author
-
Zou, Changliang, Ke, Yuan, and Zhang, Wenyang
- Subjects
DATA modeling ,SAMPLE size (Statistics) ,HIGH-dimensional model representation ,LEAST squares - Abstract
In this article, we study low rank high-dimensional multivariate linear models (LRMLM) for high-dimensional multi-response data. We propose an intuitively appealing estimation approach and develop an algorithm for implementation purposes. Asymptotic properties are established to justify the estimation procedure theoretically. Intensive simulation studies are also conducted to demonstrate performance when the sample size is finite, and a comparison is made with some popular methods from the literature. The results show the proposed estimator outperforms all of the alternative methods under various circumstances. Finally, using our suggested estimation procedure we apply the LRMLM to analyze an environmental dataset and predict concentrations of PM2.5 at the locations concerned. The results illustrate how the proposed method provides more accurate predictions than the alternative approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
20. Jackknife method for the location of gross errors in weighted total least squares.
- Author
-
Wang, Leyang, Li, Zhiqiang, and Yu, Fengbin
- Subjects
LEAST squares ,POCKETKNIVES ,OUTLIER detection ,COORDINATE transformations ,UNITS of time ,MODEL airplanes ,PARAMETER estimation - Abstract
Because the weighted total least squares (WTLS) method lacks robustness and is sensitive to gross errors, it cannot eliminate the influence of outliers effectively. A small number of gross errors may have a devastating effect on the estimates. Focusing on this limitation of the WTLS method, this work combines Jackknife resampling theory with the WTLS algorithm for the identification and detection of outliers. Gross errors in the WTLS solution are located by the Jackknife method to further improve the quality of the estimated values when the observation data are contaminated by outliers. This paper focuses on two cases: a single gross error and multiple gross errors. Detailed calculation steps and the whole procedure for outlier detection using the new method are given. This algorithm is applied to the straight-line fitting model and the plane coordinate transformation model. The experimental estimation results show that the method proposed in this paper can identify gross errors that are greater than or equal to three times the standard error, and obtains more accurate estimates than the WTLS method and the classic robust weighted total least squares (RWTLS) method. The numerical case studies verify the effectiveness and practicality of the proposed procedure. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
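A hedged sketch of the jackknife idea from the abstract above: each observation is judged by the prediction error of a leave-one-out fit that excludes it. For simplicity this uses an ordinary least squares straight-line fit rather than the paper's weighted total least squares; the threshold factor and synthetic data are assumptions.

import numpy as np

def jackknife_outlier_scan(x, y, k=3.0):
    # Flag point i if its leave-one-out prediction error exceeds k * robust sigma.
    n = x.size
    errors = np.empty(n)
    for i in range(n):
        mask = np.ones(n, dtype=bool)
        mask[i] = False
        A = np.column_stack([np.ones(n - 1), x[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        errors[i] = y[i] - (coef[0] + coef[1] * x[i])
    sigma = np.median(np.abs(errors)) / 0.6745            # robust scale
    return np.where(np.abs(errors) > k * sigma)[0], errors

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 40)
y = 2.0 + 0.5 * x + rng.normal(scale=0.05, size=x.size)
y[7] += 1.0                                               # one gross error
flagged, _ = jackknife_outlier_scan(x, y)
print(flagged)                                            # expected: [7]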
21. Multiple Degree Reduction and Elevation of Bézier Curves Using Jacobi-Bernstein Basis Transformations.
- Author
-
Rababah, Abedallah, Lee, Byung-Gook, and Yoo, Jaechil
- Subjects
MATHEMATICAL transformations ,BERNSTEIN polynomials ,CURVES ,JACOBI polynomials ,LEAST squares ,APPROXIMATION theory ,ORTHOGONAL polynomials - Abstract
In this article, we find the optimal r-times degree reduction of Bézier curves with respect to the Jacobi-weighted L2-norm on the interval [0, 1]. We describe a simple and efficient algorithm based on matrix computations. Our method also includes many previous results for the best approximation with the L1, L2, and L∞ norms. We give some examples and figures to demonstrate these methods. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
22. On estimation in varying coefficient models for sparse and irregularly sampled functional data.
- Author
-
Mostafaiy, Behdad
- Subjects
BILIARY liver cirrhosis ,HILBERT space ,STOCHASTIC processes ,LEAST squares - Abstract
In this paper, we study a smoothness regularization method for a varying coefficient model based on sparse and irregularly sampled functional data which is contaminated with some measurement errors. We estimate the one-dimensional covariance and cross-covariance functions of the underlying stochastic processes based on a reproducing kernel Hilbert space approach. We then obtain least squares estimates of the coefficient functions. Simulation studies demonstrate that the proposed method has good performance. We illustrate our method by an analysis of longitudinal primary biliary liver cirrhosis data. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
23. A robust meta-heuristic adaptive Bi-CGSTAB algorithm to online estimation of a three DoF state–space model in the presence of disturbance and uncertainty.
- Author
-
Hosseini, Shahram, Navabi, M., and Hajarian, Masoud
- Subjects
ONLINE algorithms ,METAHEURISTIC algorithms ,LEAST squares ,KRYLOV subspace ,KALMAN filtering ,SYSTEM dynamics ,HYPERSONIC aerodynamics - Abstract
Most control systems require a fairly accurate model of the dynamic system to design or implement a controller. When the system dynamics change, the dynamic model must undergo online or offline re-estimation. Online model estimation algorithms in the time domain, especially for large models in the presence of sensor noise, model uncertainty and external disturbance, are often inaccurate and unstable. In this paper, based on the dynamic model characteristics, a novel online robust meta-heuristic adaptive Bi-Conjugate Gradient Stabilized (Bi-CGSTAB) algorithm is proposed to estimate the model parameters and attitude simultaneously. First, the model is estimated iteratively using the output of attitude estimation from the Kalman filter algorithm, and the attitude is estimated from the output of the estimated model using the least squares method. The estimation method focuses on the solving algorithm for the matrix equations of the model estimation. The online robust meta-heuristic adaptive Bi-CGSTAB method uses the information of the previous iteration in the current iteration to set the solving steps toward the local optimums. This method leads to a broader and more intelligent search in the Krylov subspace of solutions. The numerical results show higher performance, robustness and more accurate model estimation than the other methods stated in the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
24. Numerical Simulation of Unsteady Channel Flow with a Moving Indentation Using Solution Dependent Weighted Least Squares Based Gradients Calculations over Unstructured Mesh.
- Author
-
Sonawane, Chandrakant R., More, Yogesh B., and Pandey, Anand Kumar
- Subjects
CHANNEL flow ,LEAST squares ,FLUID-structure interaction ,RADIAL basis functions ,FLUID flow ,UNSTEADY flow - Abstract
A fluid-structure interaction problem – unsteady channel flow with a moving indentation, which represents flow features of an oscillating stenosis in a blood vessel – is numerically simulated here. The flow phenomenon inside the channel with a moving boundary is found to be unsteady and complex, mainly due to the presence of the moving boundary and its interaction with the flowing fluid. In this article, a high-order accurate Harten–Lax–van Leer with contact Riemann solver for artificial compressibility has been used for flow computation. The Riemann solver is modified to incorporate an arbitrary Lagrangian–Eulerian (ALE) formulation to take care of mesh movement in the computation, where a radial basis function is used for dynamically moving the mesh. Higher-order accuracy over unstructured meshes is achieved using quadratic solution reconstruction based on solution dependent weighted least squares (SDWLS) gradient calculation. The present numerical scheme is validated here, and the numerical results produced are found to agree with experimental as well as numerical results reported in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
25. Dynamic performance-based automatic generation control unit allocation with frequency sensitivity identification.
- Author
-
Zhang, Jingyi, Lu, Chao, and Song, Jie
- Subjects
ELECTRIC power systems ,MANUFACTURING industries ,ELECTRICITY ,LEAST squares ,SIMULATION methods & models ,POWER resources ,ELECTRIC generators ,MANAGEMENT - Abstract
With the alteration of energy structures in power systems, the allocation of automatic generation control (AGC) is facing new challenges. The emergence of a high penetration of manufacturing sectors and renewable energy sources has increased the demand for faster-ramping resources to participate in the frequency regulation service. Additionally, the current regulation service does not properly arrange the output of the resources considering their actual performance while they follow the AGC allocation signals, which affects the accuracy of frequency regulation. The fast-ramping capacity and response accuracy of AGC units should be considered in the dispatch. Meanwhile, the power outputs of different governors have different impacts on the system frequency, which has important guiding significance for AGC dispatch. With the purpose of improving the frequency regulation service, this paper proposes a dynamic performance-based dispatch model considering the above issues. We first prove that there is a linear relation between the output of the generators and the system frequency, defined as the frequency sensitivity. Then, the frequency sensitivity of each generator can be identified using the least squares (LS) method. Furthermore, a dynamic multi-objective optimization allocation model is established, which considers the units’ economy, ramping capacity and accuracy. Finally, the proposed identification method and allocation model are simulated in the IEEE 9-bus system, and the simulation results verify their validity and feasibility. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
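The frequency-sensitivity identification mentioned in the abstract above is, in essence, a linear least squares fit of frequency deviations against generator output deviations. The sketch below shows only that step on simulated data; the sensitivities, units, and noise level are assumptions, not values from the paper.

import numpy as np

# Assumed linear model: delta_f ~ sum_i s_i * delta_P_i (one sample per row).
rng = np.random.default_rng(5)
n_samples, n_units = 500, 3
true_sens = np.array([0.012, 0.025, 0.008])               # illustrative Hz/MW
dP = rng.normal(scale=20.0, size=(n_samples, n_units))    # output deviations
df = dP @ true_sens + rng.normal(scale=0.002, size=n_samples)

sens, *_ = np.linalg.lstsq(dP, df, rcond=None)            # LS identification
print(sens)                                               # close to true_sens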
26. Rational (Padé) approximation for estimating the components of the partially-linear regression model.
- Author
-
Aydın, Dursun, Yılmaz, Ersin, and Chamidah, Nur
- Subjects
REGRESSION analysis ,LEAST squares ,LINEAR equations ,LINEAR systems ,MULTICOLLINEARITY ,SPLINES - Abstract
This paper proposes a new smoothing technique based on rational function approximation using truncated total least squares (P-TTLS) and compares it with the widely used smoothing spline method, which has become a very powerful smoothing technique in the semiparametric regression setting. Due to the nature of rational approximation, it generates a system of linear equations with multicollinearities and errors in all its variables. The proposed method is mainly designed to deal with these problems, especially for solving error-contaminated systems and ill-conditioned issues. To demonstrate the ability of the proposed method, we perform simulation experiments under different conditions and employ a real-world data application. The outcomes of the studies show that the model parameters estimated by P-TTLS have lower variances than those of the benchmark smoothing spline (B-SS) technique. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
27. The red indicator and corrected VIFs in generalized linear models.
- Author
-
Özkale, M. Revan
- Subjects
MULTICOLLINEARITY ,MAXIMUM likelihood statistics ,REGRESSION analysis ,LEAST squares ,POISSON regression ,LOGISTIC regression analysis - Abstract
Investigators who seek to employ regression analysis usually encounter the problem of multicollinearity, i.e., dependency between two or more explanatory variables. Multicollinearity is associated with unstable estimated coefficients and results in high variances of the least squares estimators in linear regression models (LRMs). Thus, the detection of collinearity is a compulsory first step in regression analysis. Multicollinearity also arises in generalized linear models (GLMs) and has the same serious effects on the maximum likelihood estimates. The purposes of this paper are to propose new collinearity diagnostic criteria in GLMs in the context of both the maximum likelihood and ridge estimators, to examine the properties of the new collinearity diagnostics via the ridge constant, and to exemplify the theoretical results with numerical examples on Poisson, Binomial and Gamma responses. The effects of centering and scaling the information matrix on the sensitivity of the diagnostics in the presence of collinearity are also investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
28. Design and control of a 7 DOF redundant manipulator arm.
- Author
-
Kumar, Priyadarshi Biplab, Verma, Navneet Kumar, Parhi, Dayal R., and Priyadarshi, Deepayan
- Subjects
MANIPULATORS (Machinery) ,DEGREES of freedom ,LEAST squares ,ROBOTS ,KINEMATICS ,ROBOTICS - Abstract
With the development of science and technology, robots have become an integral part of human life. To ease human effort in tedious and repetitive tasks, robots are used in human platforms. The current analysis is focused on the design and development of a robotic manipulator to perform pick-and-place operations. First, kinematic analysis is performed on the different links of the designed manipulator to determine its workspace and singularities. Each link of the manipulator is modelled in SOLIDWORKS software and simultaneously imported into V-REP software. In the V-REP software, the links are grouped to form the manipulator arm. Each link is attached to a joint. After formation of the manipulator arm, simulations of different physical operations are performed. While performing different physical operations with the manipulator arm, graphical analysis is performed for the position, velocity and acceleration of the revolute joints. The force required for the prismatic joint to hold the object is also calculated. The results obtained from the simulation analysis reveal that the manipulator can work satisfactorily in a practical environment. Finally, the results are compared for manipulators of different degrees of freedom to obtain the workspace required for smooth operation. The current study holds large importance in the robotics field as mobile manipulators are extensively used in different industries, and the same analysis can also be extended towards other forms of robots. Abbreviations: DLS: Damped Least Squares; DH: Denavit-Hartenberg; V-REP: Virtual Robot Experimentation Platform; IK: Inverse Kinematics [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
29. A Novel Hyperspectral Unmixing Method based on Least Squares Twin Support Vector Machines.
- Author
-
Wang, Liguo, Wang, San, Jia, Xiuping, and Bi, Tianyi
- Subjects
LEAST squares ,SUPPORT vector machines ,QUADRATIC programming ,HYPERPLANES - Abstract
In hyperspectral images, endmembers characterizing one class of ground object may vary due to illumination, weathering, and slight variations of the materials. This phenomenon is called intra-class endmember variability, which is one of the important factors affecting the performance of unmixing. However, intra-class endmember variability is often ignored in unmixing, which causes a decrease in unmixing accuracy. How to deal with intra-class endmember variability is therefore the focus of this work. To address this problem, we propose a novel hyperspectral unmixing method based on Least Squares Twin Support Vector Machines (ULSTWSVM). ULSTWSVM uses multiple training samples (endmembers) to model a pure class, which takes intra-class endmember variability into account in unmixing. At the same time, ULSTWSVM obtains abundances by calculating the distances from the mixed pixels to the classification hyperplanes, which is simple and efficient. ULSTWSVM mainly comprises three steps: (1) obtain the two non-parallel classification hyperplanes by solving two quadratic programming problems (QPPs) in the least squares sense, (2) calculate the distances from the mixed pixels to the classification hyperplanes, and (3) normalize the distances and convert them to abundances. Experimental results on both synthetic and real hyperspectral data show that the proposed method outperforms the methods used for comparison. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
30. Multilevel Inverter Based Power Quality Enhancement Using Improved Immune Control Algorithm.
- Author
-
Bansal, Praveen and Singh, Alka
- Subjects
ELECTRIC power filters ,ALGORITHMS ,HIGH voltages ,LEAST squares ,ADAPTIVE control systems ,PULSE width modulation transformers - Abstract
Multilevel inverters are currently the subject of extensive investigation and are used primarily for medium- and high-voltage distribution systems. The work presented in this paper provides a cost-effective and realistic solution to mitigate power quality (PQ) problems. A Cascaded H-Bridge multilevel inverter (CHB-MLI) has been configured as a shunt active power filter (SAPF). It is well reported that large-scale use of power-electronics-based devices leads to several PQ problems such as poor power factor, unregulated dc voltage, harmonics, and poor voltage regulation. In this paper, an improved immune feedback control algorithm is proposed and developed to mitigate PQ issues by predicting the weighted active component of the load current and generating the reference current for the grid. Extensive MATLAB simulation and experimental work are reported for validation on a single-phase system. A low prototype model is developed in the laboratory to study and compare the steady-state and dynamic results of the proposed controller with the conventional Least Mean Square (LMS) and Immune algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
31. Geoid determination through the combined least-squares adjustment of GNSS/levelling/gravity networks – a case study in Linyi, China.
- Author
-
Guo, Dongmei and Xue, Zhixin
- Subjects
GEOID ,GRAVITY ,STOCHASTIC models ,PARAMETRIC modeling ,LEAST squares - Abstract
A detailed discussion of the adjustment problems used to combine GNSS/levelling/gravity network data is provided in this paper. The two primary problems inherent to heterogeneous data networks, namely, parametric models that describe the datums and systematic distortions among the available data sets and stochastic models that describe the observational residuals, are described. For parametric models, a relationship between the transformation parameters and the effects of datums and systematic distortions inherent among different height data types is established based on a least squares criterion. For stochastic models, the stochastic errors in GNSS/levelling/gravity data are evaluated, and a Helmert variance component estimation approach is introduced to refine weighting models. Finally, the proposed model is applied to determine the hybrid geoid in Linyi, China. The numerical results validate the capability and effectiveness of the proposed combined adjustment technique for hybrid geoid computations, revealing an achievable external accuracy of ±1.22 cm compared with GNSS/levelling measurements, which can be increased by 0.44 cm compared with classic adjustments of GNSS/levelling/geoid height data. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
32. Semiparametric inferences for panel data models with fixed effects via nearest neighbor difference transformation.
- Author
-
Xu, Qiuhua, Cai, Zongwu, and Fang, Ying
- Subjects
FIXED effects model ,PANEL analysis ,MONTE Carlo method ,LEAST squares ,DATA modeling - Abstract
In this paper, we propose a simple method to estimate a partially varying-coefficient panel data model with fixed effects. By taking differences with respect to the nearest neighbor of the smoothing variables to remove the fixed effects, we employ the profile least squares method and local linear fitting to estimate the parametric and nonparametric parts, respectively. Moreover, a functional-form specification test and a nonparametric Hausman-type test are constructed, and their asymptotic properties are derived. Monte Carlo simulations are conducted to examine the finite-sample performance of our estimators and test statistics. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
33. Functional directed graphical models and applications in root-cause analysis and diagnosis.
- Author
-
Gómez, Ana María Estrada, Paynabar, Kamran, and Pacella, Massimo
- Subjects
DIAGNOSIS ,DIRECTED graphs ,RANDOM variables ,LEAST squares ,ALGORITHMS - Abstract
Directed graphical models aim to represent the probabilistic relationships between variables in a system. Learning a directed graphical model from data includes parameter learning and structure learning. Several methods have been developed for directed graphical models with scalar variables. However, the case in which the variables are infinite-dimensional has not been studied thoroughly. Nowadays, in many applications, the variables are infinite-dimensional signals that need to be treated as functional random variables. This article proposes a novel method to learn directed graphical models in the functional setting. When the structure of the graph is known, function-to-function linear regression is used to estimate the parameters of the graph. When the goal is to learn the structure, a penalized least squares loss function with a group LASSO penalty, for variable selection, and an L2 penalty, to handle group selection of nodes, is defined. A cyclic coordinate accelerated proximal gradient descent algorithm is employed to minimize the loss function and learn the structure of the directed graph. Through simulations and a case study, the advantage of the proposed method is proven. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
34. Determination of singular value truncation threshold for regularization in ill-posed problems.
- Author
-
Duan, Shuyong, Yang, Botao, Wang, Fang, and Liu, Guirong
- Subjects
RADON transforms ,PROBLEM solving ,LEAST squares ,REGULARIZATION parameter ,INVERSE problems ,INDEX numbers (Economics) ,SINGULAR value decomposition - Abstract
Appropriate regularization parameter specification is the linchpin for solving ill-posed inverse problems when a regularization method is applied. This paper presents a novel technique to determine the cut-off singular values in truncated singular value decomposition (TSVD) methods. Simple formulae are presented to calculate the index number of the singular value beyond which all the smaller singular values and the corresponding vectors are truncated. The method for determining the optimal truncation threshold is first derived theoretically. Two-dimensional inverse problems involving the Radon transform are then used as examples. Formulae to solve the problem with insufficient image resolution and projection angle number are derived using the currently proposed method. The results show that the accuracy of the current method is similar to that of TSVD but with much superior efficiency. On the other hand, insufficiency in the input data affects the output accuracy of the inverse solution; a least squares method can be employed to establish formulae for calculating the truncation threshold. For an insufficient set of input data, the percentage difference between the inversely reconstructed signal and the TSVD-reconstructed signal is about 3%. The current formulae offer a reliable and more efficient approach to calculating the truncation threshold when TSVD is applied to solve inverse problems with known system characteristics. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
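A minimal, hedged illustration of the truncation step discussed in the abstract above: keep the k largest singular values and discard the rest, which stabilizes an ill-conditioned solve. The threshold-selection formulae are the paper's contribution and are not reproduced here; the toy matrix, noise level, and cut-off index are assumptions.

import numpy as np

def tsvd_solve(A, b, k):
    # Truncated SVD solution: invert only the k largest singular values.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(s.size) < k, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert-like, ill-conditioned
x_true = np.ones(n)
b = A @ x_true + 1e-8 * np.random.default_rng(6).normal(size=n)

print(np.linalg.norm(np.linalg.solve(A, b) - x_true))     # error amplified by ill-conditioning
print(np.linalg.norm(tsvd_solve(A, b, k=7) - x_true))     # far smaller error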
35. Numerical solution of two-dimensional Fredholm–Hammerstein integral equations on 2D irregular domains by using modified moving least-square method.
- Author
-
El Majouti, Z., El Jid, R., and Hajjaj, A.
- Subjects
LEAST squares ,MOMENTS method (Statistics) ,SPLINES - Abstract
In this work, we describe a numerical scheme based on the modified moving least-squares (MMLS) method for solving Fredholm–Hammerstein integral equations on 2D irregular domains. The moment matrix in the moving least squares (MLS) method may be singular when the number of points in the local support domain is insufficient. To overcome this problem, the MMLS method with a non-singular moment matrix is used. The basic advantage of the proposed method is that it does not require any adaptation of the nodal density in non-rectangular domains, and the results converge more quickly to the exact solution. An error bound for the proposed method is provided. The new technique is examined on various integral equations and compared with the classical MLS method to show the accuracy and computational efficiency of the method. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
36. Iteratively reweighted least square for asymmetric L2-Loss support vector regression.
- Author
-
Zheng, Songfeng
- Subjects
LEAST squares ,LINEAR algebra - Abstract
In the support vector regression (SVR) model, using the squared ϵ-insensitive loss function makes the objective function of the optimization problem strictly convex and yields a more concise solution. However, the formulation leads to a quadratic program which is expensive to solve. This paper reformulates the optimization problem by absorbing the constraints into the objective function, and the new formulation shares similarity with the weighted least squares regression problem. Based on this formulation, we propose an iteratively reweighted least squares approach to train the L2-loss SVR, for both linear and nonlinear models. The proposed approach is easy to implement, without requiring any additional computing package other than basic linear algebra operations. Numerical studies on real-world datasets show that, compared to the alternatives, the proposed approach can achieve similar prediction accuracy with substantially higher time efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
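To make the IRLS idea in the abstract above concrete without reproducing the paper's SVR formulation (no ϵ-insensitive tube and no regularization term here), the sketch below fits a linear expectile regression under the asymmetric squared loss, whose weights are tau for positive residuals and 1 - tau for negative ones. The data and tau values are assumptions.

import numpy as np

def expectile_irls(X, y, tau=0.8, n_iter=100, tol=1e-10):
    # Linear expectile regression by IRLS on the asymmetric squared loss.
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        w = np.where(r >= 0, tau, 1.0 - tau)              # asymmetric weights
        beta_new = np.linalg.lstsq(np.sqrt(w)[:, None] * X,
                                   np.sqrt(w) * y, rcond=None)[0]
        if np.linalg.norm(beta_new - beta) < tol:
            break
        beta = beta_new
    return beta

rng = np.random.default_rng(7)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
y = X @ np.array([0.0, 1.0]) + rng.normal(size=300)
print(expectile_irls(X, y, tau=0.5))                      # tau = 0.5 is ordinary LS
print(expectile_irls(X, y, tau=0.9))                      # intercept shifts upward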
37. Parameter Estimation with Cumulative Errors.
- Author
-
Beck, J.V.
- Subjects
PROBABILITY measures ,LEAST squares - Abstract
Estimation of parameters is considered for several cases involving correlated errors. The cases include first and second order cumulative errors and a more general first order case. Estimators for first order cumulative errors are tabulated for five simple linear models. More general estimators for linear models are given in matrix form and it is demonstrated using maximum likelihood that many cases can be written simply in the form of differences. Expressions are also given that can be used to estimate the correlation coefficient and the error variance. Two examples are given to illustrate the new results. [ABSTRACT FROM AUTHOR]
- Published
- 1974
- Full Text
- View/download PDF
38. Application of PSO-LSSVM and hybrid programming to fault diagnosis of refrigeration systems.
- Author
-
Ren, Zhengxiong, Han, Hua, Cui, Xiaoyu, Qing, Hong, and Ye, Huiyun
- Subjects
FAULT diagnosis ,PARTICLE swarm optimization ,RELIABILITY in engineering ,SUPPORT vector machines ,REFRIGERATION & refrigerating machinery ,LEAST squares - Abstract
Fault detection and diagnosis (FDD) in refrigeration systems is of great importance for ensuring better equipment reliability and energy efficiency. Although numerous studies have investigated FDD algorithms and methodology, there is still a lack of mature commercial software in this field. This study presents a novel hybrid model by introducing particle swarm optimization (PSO) into the least squares support vector machine (LSSVM) for parameter optimization, to overcome the blindness of parameter selection, and proposes a novel idea of hybrid programming, where MATLAB is used to implement the FDD strategy and LabVIEW is employed for interface creation, to take advantage of both sides. The hybrid programming is carried out through the MATLAB script node, and an FDD platform for refrigeration systems is established. The strategy and the platform are validated using experimental data for a centrifugal chiller, where seven typical faults were investigated. The results show that the proposed PSO-LSSVM achieves an overall diagnostic accuracy of 99.70%, drastically improved (by 8.81%) over that of the LSSVM without optimization. The idea of hybrid programming is feasible for the establishment of an operable and highly integrated FDD platform with a user-friendly interface and extendable functions. The practice of the idea also promotes the possibility of combining FDD with system control for better field applications. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
39. Investigation into the error compensation method of the surface form based on feed rate optimization in deterministic polishing.
- Author
-
Fan, Cheng, Xue, Yucheng, Zhang, Lei, Zhao, Qizhi, Lu, Yao, and Wang, Qian
- Subjects
GRINDING & polishing ,SURFACE roughness ,LEAST squares ,MATRIX multiplications ,FEED additives ,POINT processes ,ANIMAL feeds - Abstract
As the final step in fabricating optical parts, the deterministic polishing process is usually used to reduce the roughness and correct the surface form. In this article, a new compensation method for the surface form error is proposed by optimizing the feed rate of the polishing process. The local material removal is described as the polished depth orthogonal to the tool path, and the local material removal model is also developed. Then, the linear algebraic expression of the global polished profile is derived by convoluting the local removal depth at each dwell point of the polishing process. In this model, the global polished depth matrix equals the product of the influence matrix and the feed rate matrix. On this basis, the error compensation method for the polishing process can be seen as an optimization of the feed rate in polishing. The non-negative least squares method is used to solve this problem. The effectiveness of the proposed model is verified by polishing experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
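The non-negative least squares step named in the abstract above can be sketched as a small deconvolution: removal depth is the influence matrix times the dwell (equivalently, inverse feed rate) vector, constrained to be non-negative. This is an illustration only, not the paper's removal model; the Gaussian influence function and target profile are assumptions.

import numpy as np
from scipy.optimize import nnls

n = 80
pos = np.arange(n)
# Assumed Gaussian removal footprint along the tool path (influence matrix).
influence = np.exp(-0.5 * ((pos[:, None] - pos[None, :]) / 3.0) ** 2)
target_depth = 1.0 + 0.5 * np.sin(2 * np.pi * pos / n)    # desired removal profile

dwell, residual_norm = nnls(influence, target_depth)      # non-negative least squares
print(dwell.min() >= 0.0, residual_norm)                  # dwell values are non-negative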
40. Local polynomial method for frequency response function identification.
- Author
-
Xiguang, Dong, Xiaoyong, Guo, and Yanxiang, Wang
- Subjects
POLYNOMIALS ,LEAST squares ,IDENTIFICATION ,DISCRETE Fourier transforms - Abstract
Here we propose the local polynomial method to solve the problem of estimating the frequency response function of a linear system. Compared with other nonparametric identification methods based on windowing strategies, this new identification method can be remarkably efficient in reducing the effect of leakage error when the discrete Fourier transform is used with a non-periodic input excitation signal. Considering the constraints between the coefficients of the polynomials at neighbouring frequencies, we modify the proposed local polynomial method to obtain a constrained local polynomial method. The modified local polynomial method reduces the mean square error of the frequency response function, and the frequency response function is identified by a multi-objective least squares criterion. Finally, the simulation example results confirm the theoretical identification results. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
41. A study on geographically weighted spatial autoregression models with spatial autoregressive disturbances.
- Author
-
Peng, Xiaozhi, Wu, Hecheng, and Ma, Ling
- Subjects
AUTOREGRESSION (Statistics) ,AUTOREGRESSIVE models ,LEAST squares - Abstract
Spatial heterogeneity and correlation are both considered in the geographically weighted spatial autoregressive model. At present, this kind of model has attracted the attention of some scholars. For the estimation of the model, the existing research is based on the assumption that the error terms are independent and identically distributed. In this article we use a computationally simple procedure for estimating the model with spatially autoregressive disturbance terms; both the estimates of the constant coefficients and of the variable coefficients are obtained. Finally, we give the large-sample properties of the estimators under some ordinary conditions. In addition, an application study of the estimation methods involved will be further explored in a separate study. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
42. Change detection using least squares one-class classification control chart.
- Author
-
Maboudou-Tchao, Edgard M.
- Subjects
QUALITY control charts ,LEAST squares ,STATISTICAL process control ,SUPPORT vector machines ,VECTOR data - Abstract
One-class classification can be thought of as a special type of two-class classification problem, where data only from one class, the target class, are available for training the classifier (referred to as a one-class classifier). The problem of classifying positive (or target) cases in the absence of appropriately characterized negative cases (or outliers) has gained increasing attention in recent years. Several methods are available to solve the one-class classification problem. Three methods are commonly used: density estimation, boundary methods, and reconstruction methods. This paper focuses on boundary methods, which include the k-center method, the nearest neighbor method, the one-class support vector machine (OCSVM), and support vector data description (SVDD). In statistical process control (SPC), practitioners have successfully used SVDD to detect anomalies or outliers in a process. In this paper, we reformulate the standard OCSVM as a least squares version of the method. This least squares one-class support vector machine (LS-OCSVM) is used to design a control chart for monitoring the mean vector of processes. We compare the performance of the LS-OCSVM chart with the SVDD and T² charts. The experimental results indicate that the proposed control chart performs very well. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
43. Development of a linear mixed-effects individual-tree basal area increment model for masson pine in Hunan Province, South-central China.
- Author
-
Wang, Wenwen, Bai, Yanfeng, Jiang, Chunqian, Yang, Haijun, and Meng, Jinghui
- Subjects
PINE ,AKAIKE information criterion ,FOREST surveys ,FOREST reserves ,LEAST squares ,TREE growth ,PINACEAE - Abstract
An individual-tree basal area increment model was developed for masson pine based on 26,276 observations of 13,138 trees in 987 sample plots from the 7th (2004), 8th (2009), and 9th (2014) Chinese National Forest Inventory in Hunan Province, South-central China. The model was built using a linear mixed-effects approach with sample plots included as random effects, since the data have a hierarchical stochastic structure and biased estimates of the standard errors of parameter estimates could be a consequence of applying ordinary least squares (OLS) regression. In addition, within-plot heteroscedasticity and autocorrelation were also considered. The final mixed-effects model was determined according to the Akaike information criterion (AIC), Bayesian information criterion (BIC), log-likelihood (Loglik), and the likelihood-ratio test (LRT). The results revealed that initial diameter (DBH), the sum of the basal area (m²/ha) of trees with DBHs larger than the DBH of the subject tree (BAL), the number of trees per hectare (NT), and elevation (EL) had a significant impact on individual-tree basal area increment. The mixed-effects model performed much better than the basic model produced using OLS. Additionally, the variance structure of the model errors was successfully modeled using the power function. However, autocorrelation structures were not defined because there was no autocorrelation amongst the data. It is believed that the final model will contribute to the scientific management of masson pine. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
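A minimal sketch of fitting a plot-level random-effects growth model of the kind described in the entry above, using statsmodels, is given below; the variable names (ln_bai, dbh, bal, nt, el, plot_id) and the data file are hypothetical placeholders, and the within-plot power variance function used in the paper is not reproduced here.

# Sketch: linear mixed-effects basal-area-increment model with plots as random effects.
# Variable and file names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("masson_pine_increment.csv")   # hypothetical inventory data set

# Fixed effects: initial diameter (dbh), basal area of larger trees (bal),
# stand density (nt), elevation (el); random intercept for each sample plot.
model = smf.mixedlm("ln_bai ~ dbh + bal + nt + el", data=df, groups=df["plot_id"])
result = model.fit(reml=True)
print(result.summary())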
44. Constrained and network multi-receiver single-epoch RTK positioning.
- Author
-
Bakuła, Mieczysław
- Subjects
REMOTE sensing ,MATHEMATICAL models ,EARTH sciences ,LEAST squares ,TREND analysis - Abstract
This work presents the concept of RTK positioning based on two or three rover GNSS receivers and two or three different reference stations. Detailed mathematical models of constrained least squares adjustment of RTK positioning based on two or three RTK/GNSS receivers are also presented. The models were tested on real RTK data using three rover Trimble RTK receivers and three reference stations of the ASG-EUPOS system. Practical calculations from the adjustments showed improved accuracy over traditional RTK positioning when the reference GNSS stations were located far away (about 30 km from the mobile RTK receivers). The maximum average absolute errors for horizontal and vertical coordinates in constrained RTK positioning were more than two times lower than in single-baseline RTK positioning, reaching 0.021 and 0.046 m, respectively. Because the concept of constrained, redundant single-epoch RTK adjustment can be used for static or kinematic applications in real-time positioning, it can also be widely used in geoscience surveying and remote sensing. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
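The constrained adjustment idea from the entry above can be sketched generically as an equality-constrained least-squares problem solved through the bordered (KKT) normal-equation system; the matrices and the single constraint below are invented placeholders rather than actual GNSS observation equations, and the paper's specific multi-receiver constraints are not reproduced.

# Sketch: equality-constrained least squares, min ||A x - b||^2 subject to C x = d,
# solved via the bordered (Karush-Kuhn-Tucker) normal-equation system.
import numpy as np

def constrained_lsq(A, b, C, d):
    n = A.shape[1]
    m = C.shape[0]
    # [ A'A  C' ] [ x      ]   [ A'b ]
    # [ C    0  ] [ lambda ] = [ d   ]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]              # constrained parameter estimates

# Toy usage: 3 unknown coordinates, 10 observations, 1 linear constraint
rng = np.random.default_rng(2)
A = rng.normal(size=(10, 3))
b = rng.normal(size=10)
C = np.array([[1.0, -1.0, 0.0]])   # e.g., a known difference between two unknowns
d = np.array([0.05])
x_hat = constrained_lsq(A, b, C, d)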
45. Time Series Seasonal Adjustment Using Regularized Singular Value Decomposition.
- Author
-
Lin, Wei, Huang, Jianhua Z., and McElroy, Tucker
- Subjects
SINGULAR value decomposition ,LEAST squares - Abstract
We propose a new seasonal adjustment method based on the Regularized Singular Value Decomposition (RSVD) of the matrix obtained by reshaping the seasonal time series data. The method is flexible enough to capture two kinds of seasonality: fixed seasonality that does not change over time and time-varying seasonality that varies from one season to another. RSVD represents the time-varying seasonality by a linear combination of several seasonal patterns. The right singular vectors capture multiple seasonal patterns, and the corresponding left singular vectors capture the magnitudes of those seasonal patterns and how they change over time. By assuming that the time-varying seasonal patterns change smoothly over time, RSVD uses penalized least squares with a roughness penalty to effectively extract the left singular vectors. The proposed method applies to seasonal time series data with a stationary or nonstationary nonseasonal component. The method also has a variant that can handle the case in which an abrupt change (i.e., a break) occurs in the magnitudes of the seasonal patterns. Our proposed method compares favorably with the state-of-the-art X-13ARIMA-SEATS program on both simulated and real data examples. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
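To illustrate the reshape-and-decompose idea in the entry above, the sketch below arranges a monthly series into a year-by-month matrix and extracts a rank-k seasonal component with an ordinary truncated SVD; the roughness penalty on the left singular vectors that defines RSVD proper is omitted, and the simulated series and chosen rank are assumptions.

# Sketch: extracting time-varying seasonality by reshaping a monthly series
# into a (years x 12) matrix and taking a truncated SVD.  The smoothness
# penalty of the actual RSVD method is omitted here.
import numpy as np

rng = np.random.default_rng(3)
n_years = 20
t = np.arange(n_years * 12)
# Toy series: trend + seasonal pattern whose amplitude grows over time + noise
amplitude = 1.0 + 0.05 * (t // 12)
series = 0.01 * t + amplitude * np.sin(2 * np.pi * (t % 12) / 12) + rng.normal(scale=0.2, size=t.size)

M = series.reshape(n_years, 12)              # rows = years, columns = months
M_centered = M - M.mean(axis=1, keepdims=True)   # remove each year's level (rough detrending)

U, s, Vt = np.linalg.svd(M_centered, full_matrices=False)
k = 1                                        # number of seasonal patterns kept
seasonal = (U[:, :k] * s[:k]) @ Vt[:k, :]    # rank-k seasonal component, year by month
deseasonalized = series - seasonal.ravel()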
46. Regularization in ultrasound tomography using projection-based regularized total least squares.
- Author
-
Almekkawy, Mohamed, Carević, Anita, Abdou, Ahmed, He, Jiayu, Lee, Geunseop, and Barlow, Jesse
- Subjects
LEAST squares ,MATHEMATICAL regularization ,TOMOGRAPHY ,KRYLOV subspace ,INVERSE problems ,BREAST ,PROBLEM solving - Abstract
Ultrasound Tomography (UT) is primarily used for the detection of malignant tissue in the human breast. However, the reconstruction algorithms used for UT require a large computational time and are based on solving a nonlinear, ill-posed inverse problem. We constructed and solved the inverse scattering problem from UT using the Distorted Born Iterative method. Since this problem is ill-posed, this paper focuses on optimizing the reconstruction by analysing and selecting a better regularization algorithm for solving the inverse problem. The performance of two regularization algorithms, Truncated Total Least Squares (TTLS) and Projection-Based Regularized Total Least Squares (PB-RTLS), is compared. The advantages of using PB-RTLS over TTLS are the reduced dimension of the problem being solved and the avoidance of the SVD calculation, which results in a significant decrease in computational time. The dimension reduction is achieved by projecting the problem onto a lower-dimensional subspace that is expanded dynamically using a generalized Krylov subspace expansion. In addition, PB-RTLS avoids the difficulty of choosing the truncation parameter in TTLS, since it has an integrated parameter search. Using simulated and breast phantoms, we demonstrate that PB-RTLS has a lower relative error, which results in better reconstructed images. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
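As background for the comparison in the entry above, the sketch shows plain truncated total least squares (TTLS) computed from the SVD of the augmented matrix [A b]; the projection-based RTLS method favored in the paper, which avoids this full SVD, is not reproduced here, and the truncation index and toy data are assumptions.

# Sketch: truncated total least squares (TTLS) for A x ≈ b via the SVD of [A b].
import numpy as np

def ttls(A, b, k):
    """Truncated TLS solution with truncation index k (k <= number of columns of A)."""
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C, full_matrices=False)
    V = Vt.T                       # (n+1) x (n+1) matrix of right singular vectors
    V12 = V[:n, k:]                # top block of the discarded singular directions
    v22 = V[n, k:]                 # bottom row of the discarded directions
    # x = -V12 v22^+ ; for a single right-hand side, v22^+ = v22 / ||v22||^2
    return -V12 @ v22 / (v22 @ v22)

# Toy usage: ill-conditioned A with noise in both A and b
rng = np.random.default_rng(4)
A = rng.normal(size=(50, 5)) @ np.diag([1, 1, 1, 1e-3, 1e-3])
x_true = np.ones(5)
b = A @ x_true + rng.normal(scale=0.01, size=50)
x_ttls = ttls(A + rng.normal(scale=0.01, size=A.shape), b, k=3)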
47. Principal components estimator for measurement error models.
- Author
-
Üstündağ Şiray, Gülesen
- Subjects
ERRORS-in-variables models ,MEASUREMENT errors ,MONTE Carlo method ,LEAST squares - Abstract
In this paper, we apply the principal components regression approach to measurement error models. We introduce the principal components estimator and then the restricted principal components estimator, obtained by combining the principal components regression estimator with the restricted least squares estimator for measurement error models, for the cases in which the reliability matrix is known and unknown, separately. We investigate the asymptotic properties and the matrix mean squared error performance of the new estimators. We also conduct a Monte Carlo simulation study and a numerical example to investigate the performance of the proposed estimators under the scalar mean squared error criterion. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
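The principal components part of the estimator in the entry above can be illustrated with a plain principal components regression sketch; the measurement-error correction via the reliability matrix, which is the paper's contribution, is not included, and the simulated data and number of retained components are assumptions.

# Sketch: principal components regression (without the measurement-error correction).
import numpy as np

def pcr(X, y, k):
    """Regress y on the first k principal components of the centered X."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Vk = Vt[:k].T                        # loadings of the first k components
    Z = Xc @ Vk                          # component scores
    gamma = np.linalg.lstsq(Z, yc, rcond=None)[0]
    return Vk @ gamma                    # coefficients mapped back to the original predictors

# Toy usage with nearly collinear predictors
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 4))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=200)      # induce near-collinearity
y = X @ np.array([1.0, 0.5, -0.5, 0.0]) + rng.normal(scale=0.1, size=200)
beta_pcr = pcr(X, y, k=3)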
48. Determination method of scaling laws based on least square method and applied to rectangular thin plates and rotor-bearing systems.
- Author
-
Zhang, Wen Di, Luo, Zhong, Ge, Xiao Biao, Zhang, Yong Qiang, and Guo, Si Wei
- Subjects
LEAST squares ,RECTANGULAR plates (Engineering) ,FINITE element method ,SIMULATION software ,SIMULATION methods & models - Abstract
In this study, we investigate the dynamic scaling laws of geometrically similar models and systems for accurately predicting their vibration characteristics. A method for determining scaling laws, based on the least squares method and used to calculate the weighted powers of the scaling factors, is proposed for the first time. Taking geometric parameters as input (design) parameters and vibration characteristic parameters as output parameters, the weighted powers of the scaling factors are calculated by the least squares similitude method (LSSM) from several design models, and the scaling factors of the output parameters are then obtained by combining the weighted powers with the corresponding scaling factors. The applicability of the LSSM is verified in two cases, a rectangular plate and a rotor-bearing system in which the stiffness of the supports is taken into account. The vibration characteristics are calculated using the finite element method in MATLAB and compared with results from the simulation software ANSYS. As a result, stable weighted powers and good predictions are obtained for both cases. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
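The core computation in the entry above, estimating the weighted powers that relate an output scaling factor to the input scaling factors, can be sketched as an ordinary least-squares fit in log space; the number of design models, the scaling factors, and the "true" powers below are invented purely for illustration.

# Sketch: estimating weighted powers p so that lambda_out ≈ prod(lambda_in_j ** p_j),
# by linear least squares on the logarithms of the scaling factors.
import numpy as np

# Each row: input scaling factors (e.g., length, width, thickness) of one design model
lam_in = np.array([[0.50, 0.50, 0.50],
                   [0.75, 0.75, 0.50],
                   [0.60, 0.80, 0.70],
                   [0.90, 0.60, 0.80]])

# Scaling factor of an output quantity for each design model, generated here from
# assumed "true" powers (1, 1, -2) plus small noise
p_true = np.array([1.0, 1.0, -2.0])
rng = np.random.default_rng(6)
lam_out = np.prod(lam_in ** p_true, axis=1) * np.exp(rng.normal(scale=0.01, size=4))

# Least-squares estimate of the weighted powers in log space
p_hat, *_ = np.linalg.lstsq(np.log(lam_in), np.log(lam_out), rcond=None)

# Predicted output scaling factor for a new set of input scaling factors
lam_new = np.array([0.70, 0.70, 0.60])
lam_out_pred = np.prod(lam_new ** p_hat)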
49. A comparison of two reflectivity parametrizations in acoustic least-squares reverse time migration.
- Author
-
Lu, Yongming and Liu, Qiancheng
- Subjects
REFLECTANCE ,DEFINITIONS ,ACOUSTIC stimulation - Abstract
In comparison with conventional reverse time migration (RTM), least-squares RTM (LSRTM) can improve imaging resolution and compensate for the irregular illumination caused by acquisition geometry and complex structures. Since it was proposed as an advanced version of RTM, it has been widely applied to improve resolution and balance amplitudes in imaging. Generally, two kinds of LSRTM reflectivity models are used: a reflectivity model related to the velocity perturbation and a reflectivity model related to the normal-incidence reflection coefficient. Each has its specific physical meaning and provides different inverted results. In this paper, we first give a brief review of the two definitions. We then compare the differences between the two methods and establish a mathematical relationship between them. In the definition related to the reflection-coefficient model, we rescale the defined reflectivity with the background velocity. In the source wavefield reconstruction, we also use an effective trick, fetching data from disk by moving a file pointer, to reduce the memory cost. Finally, we test the two LSRTM schemes on the Marmousi model. We observe that the two inverted reflectivities are different, although both image the subsurface discontinuities well. We furthermore extract traces from the inversion results and compare them with the respective true reflectivity models to verify their physical definitions. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
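The rescaling mentioned in the entry above can be illustrated with a one-line conversion, using the standard linearized relation between a velocity perturbation and the normal-incidence reflection coefficient for a constant-density acoustic medium, r ≈ δv / (2 v₀); whether this matches the paper's exact convention is an assumption, and the toy model below is a placeholder.

# Sketch: converting a velocity-perturbation reflectivity to an approximate
# normal-incidence reflection-coefficient reflectivity for constant density,
# using the linearization r ≈ delta_v / (2 * v0).
import numpy as np

v0 = np.full((100, 100), 2000.0)           # background velocity model (m/s), placeholder
delta_v = np.zeros_like(v0)
delta_v[50, :] = 200.0                      # a single horizontal velocity perturbation

r_normal_incidence = delta_v / (2.0 * v0)   # rescaled, dimensionless reflectivity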
50. Intelligent Computational Schemes for Designing more Seismic Damage-Tolerant Structures.
- Author
-
Azizsoltani, Hamoon and Haldar, Achintya
- Subjects
COMPUTATIONAL intelligence ,EARTHQUAKE resistant design ,KRIGING ,MONTE Carlo method ,RANDOM vibration ,LEAST squares ,FACTORIAL experiment designs ,IMPLICIT functions - Abstract
A novel concept of multiple deterministic analyses is proposed to design safer and more damage-tolerant structures for seismic excitation. The underlying risk is estimated to compare design alternatives. The basic response surface method is significantly improved so that implicit performance functions can be generated explicitly in approximate form. Using advanced factorial design, moving least squares, and Kriging methods, nine alternatives are proposed and verified using Monte Carlo simulation. They correctly identified and correlated the damaged states of structural elements using only a few hundred deterministic analyses. The authors believe that these schemes offer alternatives to the random vibration approach and Monte Carlo simulation. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
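One building block of the schemes in the entry above, replacing an implicit performance function with an explicit surrogate fitted to a handful of deterministic analyses and then running Monte Carlo on the surrogate, can be sketched as follows; the limit-state function, design points, and distributions are invented, and a Gaussian-process (Kriging) surrogate from scikit-learn stands in for the paper's response-surface schemes.

# Sketch: Kriging (Gaussian process) surrogate of an implicit performance function,
# followed by Monte Carlo simulation on the surrogate to estimate a failure probability.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_analysis(x):
    """Stand-in for a deterministic structural analysis; returns g(x), failure if g < 0."""
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

rng = np.random.default_rng(7)

# A small design of experiments: a few dozen deterministic analyses
X_doe = rng.uniform(-3, 3, size=(40, 2))
g_doe = expensive_analysis(X_doe)

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
surrogate = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_doe, g_doe)

# Monte Carlo on the cheap surrogate instead of the expensive analysis
X_mc = rng.normal(size=(100_000, 2))
g_mc = surrogate.predict(X_mc)
pf = np.mean(g_mc < 0.0)
print(f"Estimated failure probability: {pf:.4f}")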