305 results for "Non-linear least squares"
Search Results
2. Accurate free‐water estimation in white matter from fast diffusion MRI acquisitions using the spherical means technique
- Author
-
Rodrigo de Luis-García, Santiago Aja-Fernández, Guillem París, and Antonio Tristán-Vega
- Subjects
White matter ,Mathematical analysis ,Normal Distribution ,Free water ,Partial volume ,Brain ,Water ,Thermal diffusivity ,computer.software_genre ,White Matter ,Regression ,Diffusion MRI ,Diffusion Magnetic Resonance Imaging ,Voxel ,Non-linear least squares ,Image Processing, Computer-Assisted ,Radiology, Nuclear Medicine and imaging ,Fraction (mathematics) ,Spherical means ,22 Física ,Diffusion (business) ,computer ,Mathematics - Abstract
Producción Científica. Purpose: To accurately estimate the partial volume fraction of free water in the white matter from diffusion MRI acquisitions not demanding strong sensitizing gradients and/or large collections of different b-values. Data sets considered comprise ~32-64 gradients near b = 1000 s/mm2 plus ~6 gradients near b = 500 s/mm2. Theory and Methods: The spherical means of each diffusion MRI set with the same b-value are computed. These means are related to the inherent diffusion parameters within the voxel (free- and cellular-water fractions; cellular-water diffusivity), which are solved by constrained nonlinear least squares regression. Results: The proposed method outperforms those based on mixtures of two Gaussians for the kind of data sets considered. With respect to accuracy, the former does not introduce significant biases in the scenarios of interest, while the latter can reach a bias of 5%-7% if fiber crossings are present. With respect to precision, a variance near 10%, compared to 15%, can be attained for usual configurations. Conclusion: It is possible to compute reliable estimates of the free-water fraction inside the white matter by complementing typical DTI acquisitions with a few gradients at a low b-value. It can be done voxel by voxel, without imposing spatial regularity constraints. Ministerio de Ciencia e Innovación (grant RTI2018-094569-B-I00).
- Published
- 2021
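The constrained fit described in this abstract can be sketched in miniature. The two-compartment model, fixed free-water diffusivity, starting values, and bounds below are illustrative assumptions, not the authors' exact formulation; scipy's bounded `least_squares` recovers the free-water fraction and tissue diffusivity from noiseless synthetic signals at a few b-values:

```python
import numpy as np
from scipy.optimize import least_squares

D_FREE = 3.0e-3  # mm^2/s, free-water diffusivity (fixed, body temperature)

def signal(params, b):
    # Two-compartment model: free water plus cellular (tissue) water
    f, d_tissue = params
    return f * np.exp(-b * D_FREE) + (1.0 - f) * np.exp(-b * d_tissue)

def fit_free_water(b, s):
    # Constrained nonlinear least squares: fraction in [0, 1],
    # tissue diffusivity bounded below the free-water diffusivity
    res = least_squares(lambda p: signal(p, b) - s,
                        x0=[0.2, 0.7e-3],
                        bounds=([0.0, 0.1e-3], [1.0, 2.0e-3]))
    return res.x

b = np.array([0.0, 500.0, 500.0, 1000.0, 1000.0, 1000.0])  # s/mm^2
s = signal([0.3, 0.8e-3], b)         # noiseless synthetic voxel
f_hat, d_hat = fit_free_water(b, s)  # recovers ~0.3 and ~0.8e-3
```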
3. rTPC and nls.multstart: A new pipeline to fit thermal performance curves in R
- Author
-
Daniel Padfield, Samraat Pawar, and Hannah O'Sullivan
- Subjects
0106 biological sciences ,Bootstrapping ,Computer science ,010604 marine biology & hydrobiology ,Ecological Modeling ,Model selection ,Pipeline (computing) ,Process (computing) ,010603 evolutionary biology ,01 natural sciences ,Least squares ,Non-linear least squares ,Curve fitting ,Range (statistics) ,Algorithm ,Ecology, Evolution, Behavior and Systematics - Abstract
1. The quantification of thermal performance curves (TPCs) for biological rates has many applications to problems such as predicting species' responses to climate change. There is currently no widely used open-source pipeline to fit mathematical TPC models to data, which limits the transparency and reproducibility of the curve fitting process underlying applications of TPCs. 2. We present a new pipeline in R that currently allows for reproducible fitting of 24 different TPC models using non-linear least squares (NLLS) regression. The pipeline consists of two packages, rTPC and nls.multstart, that allow multiple start values for NLLS fitting and provide helper functions for setting start parameters. This pipeline overcomes previous problems that have made NLLS fitting and estimation of key parameters difficult or unreliable. 3. We demonstrate how rTPC and nls.multstart can be combined with other packages in R to robustly and reproducibly fit multiple models to multiple TPC datasets at once. In addition, we show how model selection or averaging, weighted model fitting, and bootstrapping can easily be implemented within the pipeline. 4. This new pipeline provides a flexible and reproducible approach that makes the challenging task of fitting multiple TPC models to data accessible to a wide range of users.
- Published
- 2021
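The multistart strategy behind nls.multstart can be imitated in a few lines: draw random start values inside box bounds, run an NLLS fit from each, and keep the lowest-cost result. Python with scipy stands in for the R packages here, and the Gaussian-shaped TPC is an illustrative choice rather than one of rTPC's 24 models:

```python
import numpy as np
from scipy.optimize import least_squares

def tpc(params, temp):
    # Illustrative thermal performance curve: Gaussian in temperature
    rmax, topt, width = params
    return rmax * np.exp(-0.5 * ((temp - topt) / width) ** 2)

def multistart_fit(temp, rate, lower, upper, n_starts=50, seed=0):
    # Random start values across the bounds; keep the best converged fit
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lower, upper)
        res = least_squares(lambda p: tpc(p, temp) - rate, x0,
                            bounds=(lower, upper))
        if best is None or res.cost < best.cost:
            best = res
    return best.x

temp = np.linspace(10.0, 40.0, 15)
rate = tpc([1.5, 28.0, 6.0], temp)  # synthetic noiseless rates
rmax, topt, width = multistart_fit(temp, rate,
                                   [0.1, 5.0, 1.0], [5.0, 45.0, 20.0])
```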
4. Common correlated effect cross‐sectional dependence corrections for nonlinear conditional mean panel models
- Author
-
Sinem Hacioglu Hoke and George Kapetanios
- Subjects
Economics and Econometrics ,05 social sciences ,Monte Carlo method ,Asymptotic distribution ,Estimator ,Conditional expectation ,Nonlinear system ,Consistency (statistics) ,Non-linear least squares ,0502 economics and business ,Statistics ,Econometrics ,050207 economics ,Social Sciences (miscellaneous) ,050205 econometrics ,Panel data ,Mathematics - Abstract
This paper provides an approach to estimation and inference for nonlinear conditional mean panel data models, in the presence of cross‐sectional dependence. We modify Pesaran's (Econometrica, 2006, 74(4), 967–1012) common correlated effects correction to filter out the interactive unobserved multifactor structure. The estimation can be carried out using nonlinear least squares, by augmenting the set of explanatory variables with cross‐sectional averages of both linear and nonlinear terms. We propose pooled and mean group estimators, derive their asymptotic distributions, and show the consistency and asymptotic normality of the coefficients of the model. The features of the proposed estimators are investigated through extensive Monte Carlo experiments. We also present two empirical exercises. The first explores the nonlinear relationship between banks' capital ratios and riskiness. The second estimates the nonlinear effect of national savings on national investment in OECD countries depending on countries' openness.
- Published
- 2020
5. Fast Nonlinear Least Squares Optimization of Large‐Scale Semi‐Sparse Problems
- Author
-
Aurel Gruber, Derek Bradley, Marco Fratarcangeli, Gaspard Zoss, and Thabo Beeler
- Subjects
Optimization problem ,Computer science ,Linear system ,020207 software engineering ,02 engineering and technology ,Solver ,Computer Graphics and Computer-Aided Design ,Nonlinear system ,Non-linear least squares ,0202 electrical engineering, electronic engineering, information engineering ,Schur complement ,Linear problem ,020201 artificial intelligence & image processing ,Algorithm - Abstract
Many problems in computer graphics and vision can be formulated as a nonlinear least squares optimization problem, for which numerous off-the-shelf solvers are readily available. Depending on the structure of the problem, however, existing solvers may be more or less suitable, and in some cases the solution comes at the cost of lengthy convergence times. One such case is semi-sparse optimization problems, emerging for example in localized facial performance reconstruction, where the nonlinear least squares problem can be composed of hundreds of thousands of cost functions, each one involving many of the optimization parameters. While such problems can be solved with existing solvers, the computation time can severely hinder the applicability of these methods. We introduce a novel iterative solver for nonlinear least squares optimization of large-scale semi-sparse problems. We use the nonlinear Levenberg-Marquardt method to locally linearize the problem in parallel, based on its first-order approximation. Then, we decompose the linear problem into small blocks, using the local Schur complement, leading to a more compact linear system without loss of information. The resulting system is dense but its size is small enough to be solved using a parallel direct method in a short amount of time. The main benefit we get by using such an approach is that the overall optimization process is entirely parallel and scalable, making it suitable to be mapped onto graphics hardware (GPU). By using our minimizer, results are obtained up to one order of magnitude faster than with other existing solvers, without sacrificing the generality and the accuracy of the model. We provide a detailed analysis of our approach and validate our results with the application of performance-based facial capture using a recently proposed anatomical local face deformation model.
- Published
- 2020
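The core linear-algebra trick in this paper, eliminating a block of unknowns through the Schur complement so that only a small dense system remains, can be shown on a toy symmetric positive-definite system. The paper applies the reduction per block inside a Levenberg-Marquardt loop; this sketch shows only the single-system elimination:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 6, 2  # many "sparse" unknowns, few "dense" ones

# Build a random symmetric positive-definite system K x = b, partitioned as
# [[A, B], [B^T, C]] [x1; x2] = [b1; b2]
M = rng.standard_normal((n1 + n2, n1 + n2))
K = M @ M.T + (n1 + n2) * np.eye(n1 + n2)
A, B, C = K[:n1, :n1], K[:n1, n1:], K[n1:, n1:]
b = rng.standard_normal(n1 + n2)
b1, b2 = b[:n1], b[n1:]

# Eliminate x1 via the Schur complement S = C - B^T A^{-1} B: only the
# small dense system S x2 = b2 - B^T A^{-1} b1 remains to be solved
A_inv_b1 = np.linalg.solve(A, b1)
S = C - B.T @ np.linalg.solve(A, B)
x2 = np.linalg.solve(S, b2 - B.T @ A_inv_b1)
x1 = np.linalg.solve(A, b1 - B @ x2)

x_direct = np.linalg.solve(K, b)  # same answer from one big solve
```

No information is lost in the reduction: stacking `x1` and `x2` reproduces the direct solution.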
6. Evaluation of electrical characteristics of biological tissue with electrical impedance spectroscopy
- Author
-
Wang Li, Hongtao Wu, Jiafeng Yao, Jingshi Huang, Hao Wang, Jianping Li, and Kai Liu
- Subjects
Hot Temperature ,Materials science ,Eggs ,Clinical Biochemistry ,02 engineering and technology ,Models, Biological ,01 natural sciences ,Biochemistry ,Analytical Chemistry ,Fitting algorithm ,Electric Impedance ,Animals ,Computer Simulation ,Least-Squares Analysis ,Electrical impedance spectroscopy ,Computer simulation ,Cell Membrane ,010401 analytical chemistry ,Impedance spectrum ,Biological tissue ,021001 nanoscience & nanotechnology ,0104 chemical sciences ,Dielectric Spectroscopy ,Non-linear least squares ,Equivalent circuit ,0210 nano-technology ,Heating time ,Biological system ,Chickens ,Algorithms - Abstract
The electrical characteristics of biological tissue are evaluated by establishing an electrical equivalent circuit with electrical impedance spectroscopy (EIS). The least squares method is used to realize electrical equivalent circuit fitting by using the developed portable EIS system. The EIS system is used to obtain the impedance spectrum data of the measured biological tissue. In the experiment, the impedance spectrum data of eggs under different heating times were measured, and the established equivalent circuit model of eggs was fitted by a nonlinear least squares fitting algorithm. Moreover, the electrical characteristics of the biological tissue are also revealed by numerical simulation with the HANAI model. The experimental and simulation results show that the extracted equivalent electrical parameters can clearly characterize the variation of the internal components of biological tissues.
- Published
- 2020
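A minimal version of the equivalent-circuit fitting step, assuming a simple Rs plus parallel-RC circuit rather than whatever model the authors established for egg tissue: the complex impedance misfit is split into real and imaginary parts so that scipy's real-valued `least_squares` can fit it.

```python
import numpy as np
from scipy.optimize import least_squares

def z_model(params, freq):
    # Series resistance plus one parallel RC element (a common EIS motif)
    rs, rp, c = params
    return rs + rp / (1.0 + 1j * 2.0 * np.pi * freq * rp * c)

def residuals(params, freq, z_meas):
    # Stack real and imaginary misfits for a real-valued solver
    diff = z_model(params, freq) - z_meas
    return np.concatenate([diff.real, diff.imag])

freq = np.logspace(1, 5, 30)  # 10 Hz .. 100 kHz
z_true = z_model([100.0, 1000.0, 1e-6], freq)  # synthetic spectrum
fit = least_squares(residuals, x0=[50.0, 500.0, 5e-7],
                    args=(freq, z_true),
                    bounds=([0.0, 0.0, 0.0], [np.inf, np.inf, np.inf]),
                    x_scale='jac')  # rescale: parameters span 9 decades
rs, rp, c = fit.x
```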
7. Uncertainty quantification in machine learning and nonlinear least squares regression models
- Author
-
John R. Kitchin and Ni Zhan
- Subjects
Environmental Engineering ,business.industry ,Computer science ,General Chemical Engineering ,Non-linear least squares ,Regression analysis ,Artificial intelligence ,Uncertainty quantification ,business ,Machine learning ,computer.software_genre ,computer ,Biotechnology - Published
- 2021
8. Using performance reference compounds to compare mass transfer calibration methodologies in passive samplers deployed in the water column
- Author
-
Abigail Sajor Joyce and Robert M. Burgess
- Subjects
Reproducibility ,010504 meteorology & atmospheric sciences ,Health, Toxicology and Mutagenesis ,Soil science ,010501 environmental sciences ,01 natural sciences ,Standard deviation ,Water column ,Non-linear least squares ,Mass transfer ,Calibration ,Environmental Chemistry ,Environmental science ,Diffusion (business) ,0105 earth and related environmental sciences ,Passive sampling - Abstract
Performance reference compounds (PRCs) are often added to passive samplers prior to field deployments to provide information about mass transfer kinetics between the sampled environment and the passive sampler. Their popularity has resulted in different methods of varying complexity to estimate mass transfer and better estimate freely dissolved concentrations (Cfree ) of targeted compounds. Three methods for describing a mass transfer model are commonly used: a first-order kinetic method, a nonlinear least squares fitting of sampling rate, and a diffusion method. Low-density polyethylene strips loaded with PRCs and of 4 different thicknesses were used as passive samplers to create an array of PRC results to assess the comparability and reproducibility of each of the methods. Samplers were deployed in the water column at 3 stations in New Bedford Harbor (MA, USA). Collected data allowed Cfree comparisons to be performed in 2 ways: 1) comparison of Cfree derived from one thickness using different methods, and 2) comparison of Cfree derived by the same method using different thicknesses of polyethylene. Overall, the nonlinear least squares and diffusion methods demonstrated the most precise results for all the PCBs measured and generated Cfree values that were often statistically indistinguishable. Relative standard deviations (RSDs) for total PCB measurements using the same thickness and varying model types ranged from 0.04 to 12% and increased with sampler thickness, and RSDs for estimates using the same method and varying thickness ranged from 8 to 18%. Environmental scientists and managers are encouraged to use these methods when estimating Cfree from passive sampling and PRC data. Environ Toxicol Chem 2018;37:2089-2097. Published 2018 Wiley Periodicals Inc. on behalf of SETAC. This article is a US government work and, as such, is in the public domain in the United States of America.
- Published
- 2018
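The first-order kinetic method named in this abstract reduces, in its simplest form, to fitting an exponential loss curve to the fraction of each PRC retained over the deployment; the fitted rate constant then calibrates uptake of the target compounds. A hypothetical single-compound sketch, with made-up sampling days and rate:

```python
import numpy as np
from scipy.optimize import curve_fit

def prc_retained(t, ke):
    # First-order loss of a performance reference compound over time
    return np.exp(-ke * t)

t_days = np.array([0.0, 7.0, 14.0, 28.0])   # deployment sampling times
frac = prc_retained(t_days, 0.05)           # synthetic retained fractions
(ke_hat,), _ = curve_fit(prc_retained, t_days, frac, p0=[0.01])
```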
9. Assessment of MR‐based and quantitative susceptibility mapping for the quantification of liver iron concentration in a mouse model at 7T
- Author
-
Gregory Simchick, Qun Zhao, Zhi Liu, Tamas Nagy, and May P. Xiong
- Subjects
Liver Iron Concentration ,Chromatography ,biology ,Chemistry ,Quantitative susceptibility mapping ,Magnetic susceptibility ,030218 nuclear medicine & medical imaging ,Ferritin ,03 medical and health sciences ,0302 clinical medicine ,Non-linear least squares ,biology.protein ,Radiology, Nuclear Medicine and imaging ,Serum ferritin ,Inductively coupled plasma mass spectrometry ,030217 neurology & neurosurgery ,Gradient echo - Abstract
Purpose To assess the feasibility of quantifying liver iron concentration (LIC) using R2* and quantitative susceptibility mapping (QSM) at a high field strength of 7 Tesla (T). Methods Five different concentrations of Fe-dextran were injected into 12 mice to produce various degrees of liver iron overload. After mice were sacrificed, blood and liver samples were harvested. Ferritin enzyme-linked immunosorbent assay (ELISA) and inductively coupled plasma mass spectrometry were performed to quantify serum ferritin concentration and LIC. Multiecho gradient echo MRI was conducted to estimate R2* and the magnetic susceptibility of each liver sample through complex nonlinear least squares fitting and a morphology enabled dipole inversion method, respectively. Results Average estimates of serum ferritin concentration, LIC, R2*, and susceptibility all show good linear correlations with injected Fe-dextran concentration; however, the standard deviations in the estimates of R2* and susceptibility increase with injected Fe-dextran concentration. Both R2* and susceptibility measurements also show good linear correlations with LIC (R2 = 0.78 and R2 = 0.91, respectively), and a susceptibility-to-LIC conversion factor of 0.829 ppm/(mg/g wet) is derived. Conclusion The feasibility of quantifying LIC using MR-based R2* and QSM at a high field strength of 7T is demonstrated. Susceptibility quantification, which is an intrinsic property of tissues and benefits from being field-strength independent, is more robust than R2* quantification in this ex vivo study. A susceptibility-to-LIC conversion factor is presented that agrees relatively well with previously published QSM derived results obtained at 1.5T and 3T.
- Published
- 2018
10. Improved liver R2* mapping by pixel-wise curve fitting with adaptive neighborhood regularization
- Author
-
Qianjin Feng, Xiaoyun Liu, Taigang He, Xinyuan Zhang, Yanqiu Feng, Wufan Chen, and Changqing Wang
- Subjects
Accuracy and precision ,Pixel ,Regularization (mathematics) ,Imaging phantom ,Standard deviation ,030218 nuclear medicine & medical imaging ,Root mean square ,03 medical and health sciences ,0302 clinical medicine ,Non-linear least squares ,Curve fitting ,Radiology, Nuclear Medicine and imaging ,Algorithm ,030217 neurology & neurosurgery ,Mathematics - Abstract
Purpose To improve liver R2* mapping by incorporating adaptive neighborhood regularization into pixel-wise curve fitting. Methods Magnetic resonance imaging R2* mapping remains challenging because of the low signal-to-noise ratio of the serial images. In this study, we proposed to exploit the neighboring pixels as regularization terms and adaptively determine the regularization parameters according to the interpixel signal similarity. The proposed algorithm, called the pixel-wise curve fitting with adaptive neighborhood regularization (PCANR), was compared with the conventional nonlinear least squares (NLS) and nonlocal means filter-based NLS algorithms on simulated, phantom, and in vivo data. Results Visually, the PCANR algorithm generates R2* maps with significantly reduced noise and well-preserved tiny structures. Quantitatively, the PCANR algorithm produces R2* maps with lower root mean square errors at varying R2* values and signal-to-noise-ratio levels compared with the NLS and nonlocal means filter-based NLS algorithms. For the high R2* values under low signal-to-noise-ratio levels, the PCANR algorithm outperforms the NLS and nonlocal means filter-based NLS algorithms in the accuracy and precision, in terms of mean and standard deviation of R2* measurements in selected region of interests, respectively. Conclusions The PCANR algorithm can reduce the effect of noise on liver R2* mapping, and the improved measurement precision will benefit the assessment of hepatic iron in clinical practice. Magn Reson Med 80:792-801, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
- Published
- 2018
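A stripped-down view of the idea: pixel-wise mono-exponential fitting whose residual vector is augmented with a penalty pulling R2* toward a neighborhood estimate. PCANR's actual regularization weights adapt to inter-pixel signal similarity; the fixed scalar weight and all numeric values here are simplifications.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_r2star(te, signal, r2_neighbor, lam):
    # Residuals: data misfit plus a penalty pulling R2* toward the
    # neighborhood estimate (a fixed-weight stand-in for PCANR's
    # similarity-adaptive regularization)
    def resid(p):
        s0, r2 = p
        return np.concatenate([s0 * np.exp(-r2 * te) - signal,
                               [np.sqrt(lam) * (r2 - r2_neighbor)]])
    res = least_squares(resid, x0=[signal[0], 50.0],
                        bounds=([0.0, 0.0], [np.inf, 2000.0]))
    return res.x

te = np.linspace(1.0e-3, 12.0e-3, 8)   # echo times, seconds
signal = 1000.0 * np.exp(-100.0 * te)  # noiseless pixel with R2* = 100/s
s0_hat, r2_hat = fit_r2star(te, signal, r2_neighbor=100.0, lam=0.0)
```

With `lam=0` this reduces to conventional per-pixel NLS; raising `lam` trades fidelity to the pixel's own data for smoothness across the neighborhood.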
11. On gradient-based optimization strategies for inverse problems in metal forming
- Author
-
Paul Steinmann, Kai Willner, Benjamin Söhngen, and P. Landkamer
- Subjects
Mathematical optimization ,Applied Mathematics ,Subroutine ,General Physics and Astronomy ,Context (language use) ,02 engineering and technology ,Inverse problem ,01 natural sciences ,010101 applied mathematics ,Identification (information) ,020303 mechanical engineering & transports ,0203 mechanical engineering ,Feature (computer vision) ,Non-linear least squares ,General Materials Science ,Shape optimization ,Minification ,0101 mathematics ,Mathematics - Abstract
In the context of metal forming, optimization issues typically lead to inverse problems with a least-squares minimization as the objective. Due to the nonlinearities in forming simulations, iterative optimization approaches have to be considered. Gradient-based solution strategies for two inverse problems are proposed and compared to each other. Firstly, the identification of elasto-plastic material parameters is considered. Secondly, a recently developed approach for the determination of an optimal workpiece design is investigated. As a special feature, both approaches can be coupled in a non-invasive fashion to arbitrary external finite element software via subroutines.
- Published
- 2017
12. Comparing cross-country estimates of Lorenz curves using a Dirichlet distribution across estimators and datasets
- Author
-
Andrew C. Chang, Phillip Li, and Shawn M. Martin
- Subjects
Economics and Econometrics ,Gini coefficient ,05 social sciences ,Estimator ,Economic statistics ,Dirichlet distribution ,symbols.namesake ,Income distribution ,Generalized Dirichlet distribution ,Non-linear least squares ,0502 economics and business ,Statistics ,Econometrics ,symbols ,050207 economics ,Lorenz curve ,Social Sciences (miscellaneous) ,050205 econometrics ,Mathematics - Abstract
Summary Chotikapanich and Griffiths (Journal of Business and Economic Statistics, 2002, 20(2), 290–295) introduced the Dirichlet distribution to the estimation of Lorenz curves. This distribution naturally accommodates the proportional nature of income share data and the dependence structure between the shares. Chotikapanich and Griffiths fit a family of five Lorenz curves to one year of Swedish and Brazilian income share data using unconstrained maximum likelihood and unconstrained nonlinear least squares. We attempt to replicate the authors' results and extend their analyses using both constrained estimation techniques and five additional years of data. We successfully replicate a majority of the authors' results and find that some of their main qualitative conclusions also hold using our constrained estimators and additional data.
- Published
- 2017
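Fitting a Lorenz curve by unconstrained NLS, the technique replicated above, can be sketched with a deliberately simple one-parameter form L(p) = p^a. The paper fits a family of five richer curves, and its Dirichlet likelihood handles the dependence between income shares that plain NLS ignores, so treat this only as the algebraic pattern:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorenz(p, a):
    # One-parameter Lorenz curve: L(p) = p**a with a >= 1
    return p ** a

pop = np.array([0.2, 0.4, 0.6, 0.8, 1.0])  # cumulative population shares
income = lorenz(pop, 2.5)                  # synthetic cumulative income shares
(a_hat,), _ = curve_fit(lorenz, pop, income, p0=[1.5])

# Gini coefficient = 1 - 2 * integral of L; for p**a the integral is 1/(a+1)
gini = 1.0 - 2.0 / (a_hat + 1.0)
```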
13. Equivalency of Kinetic Schemes: Causes and an Analysis of Some Model Fitting Algorithms
- Author
-
Vsevolod D. Dergachev and Alexander I. Petrov
- Subjects
Surface (mathematics) ,Work (thermodynamics) ,010304 chemical physics ,Chemistry ,Organic Chemistry ,Experimental data ,Function (mathematics) ,010402 general chemistry ,01 natural sciences ,Biochemistry ,Synthetic data ,0104 chemical sciences ,Visualization ,Inorganic Chemistry ,Non-linear least squares ,0103 physical sciences ,Convergence (routing) ,Physical and Theoretical Chemistry ,Algorithm - Abstract
The same experimental data can often be equally well described by multiple mathematically equivalent kinetic schemes. In the present work, we investigate several model-fitting algorithms and their ability to distinguish between mechanisms and derive the correct kinetic parameters for several different reaction classes involving consecutive reactions. We have conducted numerical experiments using synthetic experimental data for six classes of consecutive reactions involving different combinations of first- and second-order processes. The synthetic data mimic time-dependent absorption data as would be obtained from spectroscopic investigations of chemical kinetic processes. The connections between mathematically equivalent solutions are investigated, and analytical expressions describing these connections are derived. Ten optimization algorithms based on nonlinear least squares methods are compared in terms of their computational cost and frequency of convergence to global solutions. Performance is discussed, and a preferred method is recommended. A response surface visualization technique of projecting five-dimensional data onto the three-dimensional search space of the minimal function values is developed.
- Published
- 2017
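The equivalence the paper analyzes can be made concrete with the textbook A -> B -> C scheme where only the intermediate absorbs: swapping k1 and k2 while rescaling the absorptivity reproduces the data exactly, so a least-squares fit alone cannot distinguish the two parameter sets. Model and constants are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def absorbance(params, t):
    # A -> B -> C (first order, A0 = 1), only B absorbs: signal = eps * [B](t)
    k1, k2, eps = params
    return eps * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))

t = np.linspace(0.01, 10.0, 100)
y = absorbance([2.0, 0.5, 1.0], t)

# Mathematically equivalent solution: swap k1 <-> k2, rescale eps by k1/k2
y_swapped = absorbance([0.5, 2.0, 4.0], t)

# An NLS fit started near the swapped scheme also reaches ~zero residual
fit = least_squares(lambda p: absorbance(p, t) - y, x0=[0.4, 2.1, 3.5],
                    bounds=([1e-6] * 3, [np.inf] * 3))
```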
14. A comparative simulation study of Bayesian fitting approaches to intravoxel incoherent motion modeling in diffusion-weighted MRI
- Author
-
Peter T. While
- Subjects
Estimation theory ,Bayesian probability ,Estimator ,Bayesian inference ,Least squares ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Approximation error ,Non-linear least squares ,Statistics ,Radiology, Nuclear Medicine and imaging ,Algorithm ,030217 neurology & neurosurgery ,Intravoxel incoherent motion ,Mathematics - Abstract
Purpose To assess the performance of various least squares and Bayesian modeling approaches to parameter estimation in intravoxel incoherent motion (IVIM) modeling of diffusion-weighted MRI data. Methods Simulated tissue models of different type (breast/liver) and morphology (discrete/continuous) were used to generate noisy data according to the IVIM model at several signal-to-noise ratios. IVIM parameter maps were generated using six different approaches, including full nonlinear least squares (LSQ), segmented least squares (SEG), Bayesian modeling with a Gaussian shrinkage prior (BSP) and Bayesian modeling with a spatial homogeneity prior (FBM), plus two modified approaches. Estimators were compared by calculating the median absolute percentage error and deviation, and median percentage bias. Results The Bayesian modeling approaches consistently outperformed the least squares approaches, with lower relative error and deviation, and provided cleaner parameter maps with reduced erroneous heterogeneity. However, a weakness of the Bayesian approaches was exposed, whereby certain tissue features disappeared completely in regions of high parameter uncertainty. Lower error and deviation were generally afforded by FBM compared with BSP, at the cost of higher bias. Conclusions Bayesian modeling is capable of producing more visually pleasing IVIM parameter maps than least squares approaches, but their potential to mask certain tissue features demands caution during implementation. Magn Reson Med 78:2373–2387, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
- Published
- 2017
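The segmented (SEG) approach compared in this study can be sketched for a single voxel: estimate the tissue diffusivity D from high b-values, where the pseudo-diffusion term has decayed, then fix it and fit the perfusion fraction f and pseudo-diffusivity D* on the full curve. The b-values, cutoff, and tissue parameters below are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d, d_star):
    # Bi-exponential IVIM signal (S0 normalized to 1)
    return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

b = np.array([0.0, 10.0, 20.0, 40.0, 80.0, 150.0, 300.0, 500.0, 800.0])
s = ivim(b, 0.1, 1.0e-3, 20.0e-3)  # synthetic noiseless voxel

# Step 1: mono-exponential fit on high b-values, where the perfusion
# (pseudo-diffusion) component has decayed, gives the tissue D
hi = b >= 300.0
d_seg = -np.polyfit(b[hi], np.log(s[hi]), 1)[0]

# Step 2: fix D and fit the perfusion fraction f and pseudo-diffusivity D*
(f_hat, dstar_hat), _ = curve_fit(
    lambda bb, f, ds: ivim(bb, f, d_seg, ds), b, s, p0=[0.05, 10.0e-3])
```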
15. Exact and Approximate Statistical Inference for Nonlinear Regression and the Estimating Equation Approach
- Author
-
Eugene Demidenko
- Subjects
Statistics and Probability ,Mathematical optimization ,021103 operations research ,Cumulative distribution function ,0211 other engineering and technologies ,Estimator ,Regression analysis ,02 engineering and technology ,Estimating equations ,01 natural sciences ,010104 statistics & probability ,Nonlinear system ,Non-linear least squares ,Applied mathematics ,0101 mathematics ,Statistics, Probability and Uncertainty ,Nonlinear regression ,Random variable ,Mathematics - Abstract
The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike other approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.
- Published
- 2017
16. Parameter estimation using weighted total least squares in the two-compartment exchange model
- Author
-
Anders Garpebring and Tommy Löfstedt
- Subjects
Mathematical optimization ,Calibration (statistics) ,Estimation theory ,Estimator ,030218 nuclear medicine & medical imaging ,Normal distribution ,03 medical and health sciences ,0302 clinical medicine ,Residual sum of squares ,Non-linear least squares ,Applied mathematics ,Radiology, Nuclear Medicine and imaging ,Total least squares ,030217 neurology & neurosurgery ,Linear least squares ,Mathematics - Abstract
PurposeThe linear least squares (LLS) estimator provides a fast approach to parameter estimation in the linearized two-compartment exchange model. However, the LLS method may introduce a bias throu ...
- Published
- 2017
17. Bias Compensation for Rational Function Model Based on Total Least Squares
- Author
-
Ting Jiang, Anzhu Yu, Gangwu Jiang, Xiangpo Wei, Wenyue Guo, and Yi Zhang
- Subjects
010504 meteorology & atmospheric sciences ,0211 other engineering and technologies ,Explained sum of squares ,02 engineering and technology ,Generalized least squares ,Rational function ,01 natural sciences ,Least squares ,Computer Science Applications ,Polynomial and rational function modeling ,Non-linear least squares ,Statistics ,Earth and Planetary Sciences (miscellaneous) ,Errors-in-variables models ,Computers in Earth Sciences ,Total least squares ,Engineering (miscellaneous) ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,Mathematics - Published
- 2017
18. A nonlinear least-squares-based ensemble method with a penalty strategy for computing the conditional nonlinear optimal perturbations
- Author
-
Xiaobing Feng and Xiangjun Tian
- Subjects
Atmospheric Science ,Mathematical optimization ,010504 meteorology & atmospheric sciences ,Iterative method ,Numerical analysis ,0208 environmental biotechnology ,MathematicsofComputing_NUMERICALANALYSIS ,Constrained optimization ,02 engineering and technology ,Unconstrained optimization ,01 natural sciences ,020801 environmental engineering ,Burgers' equation ,Nonlinear system ,Non-linear least squares ,Penalty method ,0105 earth and related environmental sciences ,Mathematics - Abstract
Since the conditional nonlinear optimal perturbations (CNOPs) are formulated mathematically as a constrained minimization problem, the existing numerical methods for computing the CNOPs were designed to solve the original constrained optimization problems by directly adapting some (constrained) optimization algorithms. Although such an approach is natural and convenient, it often results in inefficient and expensive numerical methods for computing the CNOPs. In this paper we propose an alternative approach which is based on the idea of first transforming the original CNOP problem into a special unconstrained optimization problem (known as a nonlinear least squares problem) using a penalty strategy and then utilizing a more efficient Gauss-Newton iterative method to solve the transformed (unconstrained) nonlinear least squares problem. Compared with the existing numerical methods, the proposed Gauss-Newton iterative method has some desired advantages: it is easy to implement and portable, because it does not rely on any existing constrained optimization package; it is also fast and highly efficient. These advantages and potential merits of the proposed method are demonstrated in the paper by several sets of evaluation experiments based on the viscous Burgers equation, the T21L3 QG model and the shallow-water equation model, respectively.
- Published
- 2016
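The penalty reformulation can be illustrated on a toy problem: a constraint is appended to the residual vector with weight sqrt(mu), turning the constrained fit into an ordinary nonlinear least squares problem that a Gauss-Newton-type solver handles directly. The actual CNOP computation constrains the norm of an initial perturbation in a dynamical model; this shows only the algebraic pattern, with a made-up equality constraint.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)  # synthetic data; note 2.0 + 1.5 = 3.5

def penalized_residuals(p, mu):
    # Append the constraint a + k = 3.5 as a weighted residual: the
    # penalized problem is plain NLS, solvable by Gauss-Newton methods
    a, k = p
    misfit = a * np.exp(-k * t) - y
    return np.concatenate([misfit, [np.sqrt(mu) * (a + k - 3.5)]])

sol = least_squares(penalized_residuals, x0=[1.0, 1.0], args=(1e4,))
a_hat, k_hat = sol.x
```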
19. Parameter identification of dc black-box arc model using non-linear least squares
- Author
-
Kyu-Hoon Park, Mansoor Asif, Ho-Yun Lee, and Bang-Wook Lee
- Subjects
Computer science ,optimisation ,020209 energy ,ac arc characteristics ,dc arc feasibility ,Energy Engineering and Power Technology ,02 engineering and technology ,Fault (power engineering) ,Arc (geometry) ,circuit-breaking arcs ,Control theory ,Black box ,0202 electrical engineering, electronic engineering, information engineering ,simulate dynamic arc-circuit interaction ,Waveform ,dc arc current waveform ,Circuit breaker ,Parametric statistics ,020208 electrical & electronic engineering ,General Engineering ,DC circuit breakers ,Schwarz model parameters ,power system faults ,DC arc analysis ,black-box modelling ,lcsh:TA1-2040 ,Non-linear least squares ,black-box arc model ,DC arc characteristics ,black-box model ,circuit breakers ,lcsh:Engineering (General). Civil engineering (General) ,Software ,Datasheet - Abstract
The black-box arc model is known to be a useful technique to simulate dynamic arc-circuit interaction by reflecting arc characteristics. Existing research has shown that the black-box model has been widely used to analyse the ac arc characteristics of SF6 and air circuit breakers. Due to the enormous energy and steep rise rate (di/dt) during a dc fault, it is important to consider dc arc characteristics. However, there are no examples of black-box models for dc circuit breakers utilised in railway systems and dc microgrids. In this study, the applicability of the black-box model, which was applied to the existing ac arc analysis, was verified for the dc arc analysis. Black-box modelling is applied to datasheets of industrial low-voltage circuit breakers, and a parametric sweep method was used to select Schwarz model parameters considering the tendency of the dc arc current waveform for the dc pole-to-pole fault. The authors also applied the Levenberg-Marquardt algorithm, which is the most extensively used algorithm for the optimisation of functional parameters, to the Schwarz model for accurate and reliable arc modelling. As a result, the dc arc feasibility of the black-box model was analysed through simulation results, and a model optimisation technique was proposed.
- Published
- 2019
20. Diagnostics of calibration methods: model adequacy of UV-based determinations
- Author
-
Omer Utku Erzengin and A. Hakan Aktaş
- Subjects
Coefficient of determination ,Applied Mathematics ,010401 analytical chemistry ,Deviance (statistics) ,01 natural sciences ,0104 chemical sciences ,Analytical Chemistry ,010104 statistics & probability ,Residual sum of squares ,Non-linear least squares ,Principal component analysis ,Ordinary least squares ,Statistics ,Partial least squares regression ,0101 mathematics ,Algorithm ,Linear least squares ,Mathematics - Abstract
Models such as ordinary least squares, independent component analysis, principal component analysis, partial least squares, and artificial neural networks can be found in the calibration literature. Linear or nonlinear methods can be used to explain the structure of the same phenomenon. Each type of model has its own advantages with respect to the others. These methods are usually grouped taxonomically, but different models can sometimes be applied to the same data set. Taxonomically, ordinary least squares and artificial neural networks use completely different analytical procedures but are occasionally applied to the same data set. The aim of studying methodological superiority is to compare the residuals of models, because the model with the minimum error is preferred in real analyses. Calibration models, in general, consist of deterministic and stochastic parts; in other words, the data equal the model plus the error. Explaining a model solely using statistics such as the coefficient of determination or its related significance values is sometimes inadequate. The errors of a model, also called its residuals, must have minimum variance compared to its alternatives. Additionally, the residuals must be unpredictable, uncorrelated, and symmetric. Under these conditions, the model can be considered adequate. In this study, calibration methods were applied to the raw materials of a drug, hydrochlorothiazide and amiloride hydrochloride, as well as a sample of the drug tablet. The applied chemical procedure was fast, simple, and reproducible. The various linear and nonlinear calibration methods mentioned above were applied, and the adequacy of the calibration methods was compared according to their residuals.
- Published
- 2016
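The residual-based adequacy criteria the abstract names (minimum variance, symmetry, no correlation) can be sketched on synthetic calibration data. The line model and numbers below are illustrative, not from the paper.

```python
import numpy as np

# Fit a synthetic calibration line, then check residual adequacy:
# small variance, near-zero skewness (symmetry), Durbin-Watson near 2
# (no first-order autocorrelation).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * x + 0.05 * rng.standard_normal(x.size)

X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ beta                                         # residuals

variance = r.var(ddof=2)                                 # smaller is better
skewness = ((r - r.mean()) ** 3).mean() / r.std() ** 3   # ~0 if symmetric
durbin_watson = np.sum(np.diff(r) ** 2) / np.sum(r**2)   # ~2 if uncorrelated
```

Competing models would be compared by computing these same diagnostics on each model's residuals.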
21. Comparison of linear and nonlinear implementation of the compartmental tissue uptake model for dynamic contrast-enhanced MRI
- Author
-
Julia A. Schnabel, Michael A. Chappell, Jesper F. Kallehauge, Kari Tanderup, Benjamin Irving, and Steven Sourbron
- Subjects
Accuracy and precision ,Speedup ,Computer science ,Linear model ,030218 nuclear medicine & medical imaging ,Upsampling ,03 medical and health sciences ,Noise ,Nonlinear system ,0302 clinical medicine ,Sampling (signal processing) ,030220 oncology & carcinogenesis ,Non-linear least squares ,Statistics ,Radiology, Nuclear Medicine and imaging ,Algorithm - Abstract
Purpose Fitting tracer kinetic models using linear methods is much faster than using their nonlinear counterparts, although this often comes at the expense of reduced accuracy and precision. The aim of this study was to derive and compare the performance of the linear compartmental tissue uptake (CTU) model against its nonlinear version with respect to percentage error and precision. Theory and Methods The linear and nonlinear CTU models were initially compared using simulations with varying noise and temporal sampling. Subsequently, the clinical applicability of the linear model was demonstrated on 14 patients with locally advanced cervical cancer examined with dynamic contrast-enhanced magnetic resonance imaging. Results Simulations revealed equal percentage error and precision when noise was within clinically achievable ranges (contrast-to-noise ratio >10). The linear method was significantly faster than the nonlinear method, with a minimum speedup of around 230 across all tested sampling rates. Clinical analysis revealed that parameters estimated using the linear and nonlinear CTU model were highly correlated (ρ ≥ 0.95). Conclusion The linear CTU model is computationally more efficient and more stable against temporal downsampling, whereas the nonlinear method is more robust to variations in noise. The two methods may be used interchangeably within clinically achievable ranges of temporal sampling and noise. Magn Reson Med 77:2414–2423, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
- Published
- 2016
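The linear-versus-nonlinear trade-off discussed above can be illustrated on a toy model (not the CTU model itself): a mono-exponential fitted once by closed-form log-linear least squares and once by iterative NLLS. All data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy analogue of the comparison: fit y = A*exp(-k*t) two ways.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 5.0, 60)
A_true, k_true = 2.0, 0.8
y = A_true * np.exp(-k_true * t) * (1 + 0.01 * rng.standard_normal(t.size))

# Linear route: regress log(y) on t (fast, closed form, no initial guess)
slope, intercept = np.polyfit(t, np.log(y), 1)
k_lin, A_lin = -slope, np.exp(intercept)

# Nonlinear route: iterative NLLS on the original model (needs p0)
(A_nl, k_nl), _ = curve_fit(lambda t, A, k: A * np.exp(-k * t), t, y,
                            p0=[1.0, 1.0])
```

At low noise both routes recover the parameters; the linearized route changes the effective noise weighting, which is one reason the two implementations diverge as noise grows.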
22. Nonlinear least squares regression for single image scanning electron microscope signal-to-noise ratio estimation
- Author
-
Syafiq Norhisham and Kok Swee Sim
- Subjects
0301 basic medicine ,Histology ,Scanning electron microscope ,Autocorrelation ,Analytical chemistry ,Neighbourhood (graph theory) ,02 engineering and technology ,021001 nanoscience & nanotechnology ,Regression ,Pathology and Forensic Medicine ,03 medical and health sciences ,symbols.namesake ,030104 developmental biology ,Signal-to-noise ratio (imaging) ,Gaussian noise ,Non-linear least squares ,symbols ,0210 nano-technology ,Algorithm ,Interpolation ,Mathematics - Abstract
A new method based on nonlinear least squares regression (NLLSR) is formulated to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. The SNR estimates produced by the NLLSR method are compared with those of three existing methods: nearest neighbourhood, first-order interpolation, and the combination of nearest neighbourhood with first-order interpolation. Samples of SEM images with different textures, contrasts and edges were used to test the performance of the NLLSR method in estimating the SNR values of the SEM images. It is shown that the NLLSR method produces better estimation accuracy than the other three existing methods. According to the SNR results obtained from the experiment, the NLLSR method yields an SNR error difference of less than approximately 1% relative to the other three existing methods.
- Published
- 2016
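The underlying idea, estimating SNR by regressing the autocorrelation away from zero lag and extrapolating the noise-free zero-lag value, can be sketched on 1-D synthetic data. This is a hedged illustration only: the AR(1) "signal", the noise level, and the geometric autocorrelation model are all assumptions, not the paper's SEM pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

# White noise lifts the autocorrelation only at lag 0, so regressing
# acf(k) for k >= 1 and extrapolating to k = 0 separates signal from noise.
rng = np.random.default_rng(3)
n = 4096
x = np.zeros(n)
e = rng.standard_normal(n)
for i in range(1, n):
    x[i] = 0.9 * x[i - 1] + e[i]          # correlated "signal"
y = x + 2.0 * rng.standard_normal(n)      # signal plus white noise

lags = np.arange(1, 8)
acf = np.array([np.mean(y[:-k] * y[k:]) for k in lags])

# Assumed model acf(k) = a * rho**k; a extrapolates the noise-free value
(a, rho), _ = curve_fit(lambda k, a, rho: a * rho**k, lags, acf,
                        p0=[1.0, 0.5])
var_total = np.mean(y * y)
snr_db = 10 * np.log10(a / max(var_total - a, 1e-12))
```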
23. Improved Time of Arrival measurement model for non‐convex optimization
- Author
-
Juri Sidorenko, Leo R. Ya. Doktorski, Volker Schatz, Norbert Scherer-Negenborn, Michael Arens, and Urs Hugentobler
- Subjects
Computer science ,05 social sciences ,Aerospace Engineering ,050801 communication & media studies ,020206 networking & telecommunications ,02 engineering and technology ,ddc ,Maxima and minima ,0508 media and communications ,Time of arrival ,GNSS applications ,Non-linear least squares ,Saddle point ,Convergence (routing) ,0202 electrical engineering, electronic engineering, information engineering ,Applied mathematics ,Quadratic programming ,Minification ,Electrical and Electronic Engineering - Abstract
The quadratic system provided by the Time of Arrival technique can be solved analytically or by nonlinear least squares minimization. An important problem in quadratic optimization is possible convergence to a local minimum instead of the global minimum. This problem does not occur for Global Navigation Satellite Systems (GNSS), due to the known satellite positions. In applications with unknown positions of the reference stations, such as indoor localization with self-calibration, local minima are an important issue. This article presents an approach showing how this risk can be significantly reduced. The main idea of our approach is to transform the local minimum into a saddle point by increasing the number of dimensions. In addition to numerical tests, we analytically prove criteria under which no other local minima exist for nontrivial constellations.
- Published
- 2018
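The Time-of-Arrival least-squares formulation the abstract builds on can be sketched as follows. The anchor layout and target position are invented; the multistart loop simply illustrates the local-minimum concern, it is not the paper's dimension-lifting technique.

```python
import numpy as np
from scipy.optimize import least_squares

# ToA positioning: minimize range residuals to known anchors.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
p_true = np.array([3.0, 7.0])
ranges = np.linalg.norm(anchors - p_true, axis=1)   # noise-free ToA * c

def residuals(p):
    return np.linalg.norm(anchors - p, axis=1) - ranges

# Several starts guard against converging to a local minimum
starts = [np.array([1.0, 1.0]), np.array([9.0, 9.0]), np.array([5.0, 5.0])]
best = min((least_squares(residuals, s) for s in starts),
           key=lambda r: r.cost)
```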
24. Inference in dynamic systems using B-splines and quasilinearized ODE penalties
- Author
-
Philippe Lambert, Gianluca Frasso, and Jonathan Jaeger
- Subjects
0301 basic medicine ,Statistics and Probability ,Mathematical optimization ,MathematicsofComputing_NUMERICALANALYSIS ,Ode ,Inference ,General Medicine ,01 natural sciences ,010104 statistics & probability ,03 medical and health sciences ,Nonlinear system ,Spline (mathematics) ,030104 developmental biology ,Non-linear least squares ,Ordinary differential equation ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,Linear problem ,0101 mathematics ,Statistics, Probability and Uncertainty ,Smoothing ,Mathematics - Abstract
Nonlinear (systems of) ordinary differential equations (ODEs) are common tools in the analysis of complex one-dimensional dynamic systems. We propose a smoothing approach regularized by a quasilinearized ODE-based penalty. Within the quasilinearized spline-based framework, the estimation reduces to a conditionally linear problem for the optimization of the spline coefficients. Furthermore, standard ODE compliance parameter(s) selection criteria are applicable. We evaluate the performances of the proposed strategy through simulated and real data examples. Simulation studies suggest that the proposed procedure ensures more accurate estimates than standard nonlinear least squares approaches when the state (initial and/or boundary) conditions are not known.
- Published
- 2015
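The ODE-penalized smoothing idea above can be sketched in a stripped-down form. As a stated simplification, a finite-difference discretization replaces the B-spline basis, and the ODE y' + k*y = 0 is already linear, so no quasilinearization step is needed; data and the decay rate k are invented.

```python
import numpy as np

# Penalized least squares: minimize ||c - y||^2 + lam * ||D c + k E c||^2,
# where each penalty row approximates the ODE residual y' + k*y at one point.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 4.0, 80)
y = np.exp(-0.5 * t) + 0.02 * rng.standard_normal(t.size)

n, dt = t.size, t[1] - t[0]
D = (np.eye(n, k=1) - np.eye(n))[:-1] / dt      # forward-difference operator
E = np.eye(n)[:-1]                              # matching identity rows
k, lam = 0.5, 10.0                              # assumed decay rate, penalty weight
P = D + k * E
c = np.linalg.solve(np.eye(n) + lam * P.T @ P, y)   # smoothed state estimate
```

Because the penalty is linear in the coefficients, the smoother is a single linear solve, which is the computational point of the quasilinearized framework.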
25. Seed coordinates of a new COMS-like 24 mm plaque verified using the FARO Edge
- Author
-
Keith M. Furutani, Stephen M. Corner, Sarah McCauley Cutsinger, and Renae M. Forsman
- Subjects
Brachytherapy ,Maximum deviation ,Edge (geometry) ,eye plaque brachytherapy ,Optics ,Radiation Oncology Physics ,Humans ,Radiology, Nuclear Medicine and imaging ,Dimethylpolysiloxanes ,Least-Squares Analysis ,Radiometry ,Melanoma ,Instrumentation ,Root-mean-square deviation ,Mathematics ,Task group ,Radiation ,business.industry ,Choroid Neoplasms ,Radiotherapy Planning, Computer-Assisted ,Plaque brachytherapy ,Reproducibility of Results ,COMS ,Equipment Design ,Nonlinear Dynamics ,Non-linear least squares ,episcleral plaque ,business - Abstract
A 24 mm COMS‐like eye plaque was developed to meet the treatment needs of our eye plaque brachytherapy practice. As part of commissioning, it was necessary to determine the new plaque's seed coordinates. The FARO Edge, a commercially available measurement arm, was chosen for this purpose. In order to validate the FARO Edge method, it was first used to measure the seed marker coordinates in the silastic molds for the standard 10, 18, and 20 mm COMS plaques, and the results were compared with the standard published Task Group 129 coordinates by a nonlinear least squares match in MATLAB version R2013a. All measured coordinates were within 0.60 mm, and root mean square deviation was 0.12, 0.23, and 0.35 mm for the 10, 18, and 20 mm molds, respectively. The FARO Edge was then used to measure the seed marker locations in the new 24 mm silastic mold. Those values were compared to the manufacturing specification coordinates and were found to demonstrate good agreement, with a maximum deviation of 0.56 mm and a root mean square deviation of 0.37 mm. The FARO Edge is deemed to be a reliable method for determining seed coordinates for COMS silastics, and the seed coordinates for the new 24 mm plaque are presented. PACS number: 87.53.Jw
- Published
- 2015
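The nonlinear least squares coordinate match described above (performed in MATLAB in the paper) can be sketched in 2-D: find the rigid rotation and translation overlaying measured points on reference coordinates, then report the RMS deviation. The point sets and transform here are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

# Rigid 2-D registration by NLLS over (angle, tx, ty).
ref = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0], [5.0, 5.0]])
th, tx, ty = 0.1, 1.5, -0.7                    # hypothetical true transform
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
meas = ref @ R.T + np.array([tx, ty])

def residuals(p):
    th, tx, ty = p
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return (ref @ R.T + np.array([tx, ty]) - meas).ravel()

fit = least_squares(residuals, x0=[0.0, 0.0, 0.0])
rms = np.sqrt(np.mean(fit.fun**2))             # analogue of the reported RMSD
```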
26. A General Nonlinear Least Squares Data Reconciliation and Estimation Method for Material Flow Analysis
- Author
-
Daniel Ralph, Julian M. Allwood, Grant M. Kopec, and Jonathan M. Cullen
- Subjects
Estimation ,Mathematical optimization ,Computer science ,Material flow analysis ,Nonlinear constrained optimization ,General Social Sciences ,02 engineering and technology ,010501 environmental sciences ,01 natural sciences ,Variable (computer science) ,Transformation (function) ,020401 chemical engineering ,Flow (mathematics) ,Non-linear least squares ,Sankey diagram ,Operations management ,0204 chemical engineering ,0105 earth and related environmental sciences ,General Environmental Science - Abstract
The extraction, transformation, use, and disposal of materials can be represented by directed, weighted networks, known in the material flow analysis (MFA) community as Sankey or flow diagrams. However, the construction of such networks is dependent on data that are often scarce, conflicting, or do not directly map onto a Sankey diagram. By formalizing the forms of data entry, a nonlinear constrained optimization program can be formulated that estimates and reconciles data sets for MFA problems where data are scarce, in conflict, do not map directly onto a Sankey diagram, or are of variable quality. This method is demonstrated by reanalyzing an existing MFA of global steel flows, and the resulting analytical solution measurably improves upon the original manual solution.
- Published
- 2015
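A minimal data-reconciliation sketch in the spirit of the abstract: adjust measured flows so that mass balance holds at a node, weighting each adjustment by measurement uncertainty. The three flows, their uncertainties, and the single-node balance are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Weighted least squares reconciliation: one inflow must equal two outflows.
m = np.array([10.0, 4.0, 5.5])            # measured inflow and two outflows
w = 1.0 / np.array([0.5, 0.2, 0.3])**2    # inverse-variance weights

res = minimize(lambda x: np.sum(w * (x - m)**2), m,
               constraints={"type": "eq",
                            "fun": lambda x: x[0] - x[1] - x[2]})
x = res.x                                  # reconciled flows
```

Less certain measurements absorb more of the 0.5-unit imbalance, which is exactly the behaviour a full MFA reconciliation generalizes across a whole Sankey network.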
27. Fitting the two-compartment model in DCE-MRI by linear inversion
- Author
-
Steven Sourbron, Daniel Lesnic, and Dimitra Flouri
- Subjects
Accuracy and precision ,Computer science ,Linear model ,Least squares ,030218 nuclear medicine & medical imaging ,Weighting ,03 medical and health sciences ,0302 clinical medicine ,Linear differential equation ,Non-linear least squares ,Temporal resolution ,Statistics ,Radiology, Nuclear Medicine and imaging ,Algorithm ,030217 neurology & neurosurgery ,Linear least squares - Abstract
Purpose: Model fitting of DCE-MRI data with non-linear least squares (NLLS) methods is slow and may be biased by the choice of initial values. The aim of this study was to develop and evaluate a linear least-squares (LLS) method to fit the two-compartment exchange and two-compartment filtration models. Methods: A second-order linear differential equation for the measured concentrations was derived in which the model parameters act as coefficients. Simulations of normal and pathological data were performed to determine calculation time, accuracy and precision under different noise levels and temporal resolutions. Performance of the LLS was evaluated by comparison against the NLLS. Results: The LLS method is about 200 times faster, which reduces the calculation time for a 256 × 256 MR slice from 9 min to 3 s. For ideal data with low noise and high temporal resolution the LLS and NLLS were equally accurate and precise. The LLS was more accurate and precise than the NLLS at low temporal resolution, but less accurate at high noise levels. Conclusion: The data show that the LLS leads to a significant reduction in calculation times, and more reliable results at low noise levels. At higher noise levels the LLS becomes exceedingly inaccurate compared to the NLLS, but this may be improved by using a suitable weighting strategy.
- Published
- 2015
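The linear-inversion idea above can be shown on a one-parameter toy model rather than the full two-compartment system: integrating the model ODE turns the rate constant into a linear regression coefficient. Data and the rate are synthetic.

```python
import numpy as np

# For y' = -k*y, integration gives y(t) - y(0) = -k * \int_0^t y ds,
# which is linear in k, so ordinary linear least squares replaces
# iterative NLLS.
rng = np.random.default_rng(5)
t = np.linspace(0.0, 5.0, 100)
k_true = 0.7
y = np.exp(-k_true * t) + 0.005 * rng.standard_normal(t.size)

# Cumulative trapezoid integral of y
Y = np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2 * np.diff(t))))
k_hat = np.linalg.lstsq(-Y[:, None], y - y[0], rcond=None)[0].item()
```

No initial guess is needed and the solve is a single matrix operation, which is where the roughly 200-fold speedup in the paper comes from.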
28. Polarized Raman spectroscopy for enhanced quantification of protein concentrations in an aqueous mixture
- Author
-
Sierin Lim, Clint Michael Perlaki, and Quan Liu
- Subjects
Reproducibility ,Aqueous solution ,Mean squared error ,Chemistry ,Quantitative proteomics ,Analytical chemistry ,Quantitative accuracy ,symbols.namesake ,Robustness (computer science) ,Non-linear least squares ,symbols ,General Materials Science ,Biological system ,Raman spectroscopy ,Spectroscopy - Abstract
Raman spectroscopy (RS) for selective quantification of protein species in mixed solutions holds enormous potential for advancing protein detection technology to significantly faster, cheaper, and less technically demanding platforms. However, even with powerful computational methods such as nonlinear least squares regression, protein quantification in such complex systems suffers from relatively poor accuracy, especially in comparison with established methods. In this work, the expanded set of spectral information provided by polarized Raman spectroscopy (PRS), which is otherwise unavailable in conventional RS, was, to our knowledge, explored for the first time to enhance the quantitative accuracy and robustness of protein quantification. A mixture containing two proteins, lysozyme and α-amylase, was used as a model system to demonstrate the enhanced quantitative accuracy and robustness of selective protein quantification using PRS. The concentrations of lysozyme and α-amylase in mixtures were estimated using data obtained from both traditional RS and PRS. A new method was developed to select highly sensitive peaks for accurate concentration estimation, taking advantage of the additional spectra offered by PRS. The root-mean-square errors (RMSE) of estimation using traditional RS and PRS were compared. A drastic improvement was observed from traditional RS to PRS, where the RMSEs of α-amylase and lysozyme concentrations decreased by factors of 11 and 7, respectively. Therefore, this technique is a successful demonstration of achieving greater accuracy and reproducibility in the estimation of protein concentrations in a mixture, and it could play a significant role in future multiplexed protein quantification platforms. Copyright © 2015 John Wiley & Sons, Ltd.
- Published
- 2015
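The core estimation step, recovering component concentrations from a mixture spectrum given reference spectra, can be sketched with nonnegative least squares. The Gaussian peaks below are invented stand-ins for measured lysozyme and α-amylase reference spectra.

```python
import numpy as np
from scipy.optimize import nnls

# Mixture spectrum = nonnegative combination of two reference spectra.
rng = np.random.default_rng(6)
wn = np.linspace(0.0, 1.0, 200)                 # "wavenumber" axis
s1 = np.exp(-((wn - 0.3) / 0.05)**2)            # stand-in reference spectra
s2 = np.exp(-((wn - 0.7) / 0.08)**2)
c_true = np.array([0.8, 1.5])                   # assumed concentrations
mix = c_true @ np.vstack([s1, s2]) + 0.01 * rng.standard_normal(wn.size)

c_hat, rnorm = nnls(np.column_stack([s1, s2]), mix)
```

PRS contributes additional, differently polarized spectra per sample, which in this framework simply means stacking more rows into the design matrix before solving.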
29. Estimation of atmospheric PSF parameters for hyperspectral imaging
- Author
-
Robert J. Plemmons, Sebastian Berisha, and James G. Nagy
- Subjects
Point spread function ,Mathematical optimization ,Algebra and Number Theory ,Applied Mathematics ,Hyperspectral imaging ,Regularization (mathematics) ,Separable space ,symbols.namesake ,Non-linear least squares ,Jacobian matrix and determinant ,symbols ,Projection (set theory) ,Algorithm ,Mathematics ,Variable (mathematics) - Abstract
Summary We present an iterative approach to solve separable nonlinear least squares problems arising in the estimation of wavelength-dependent point spread function parameters for hyperspectral imaging. A variable projection Gauss–Newton method is used to solve the nonlinear least squares problem. An analysis shows that the Jacobian can be potentially very ill conditioned. To deal with this ill conditioning, we use a combination of subset selection and other regularization techniques. Experimental results related to hyperspectral point spread function parameter identification and star spectrum reconstruction illustrate the effectiveness of the resulting numerical scheme. Copyright © 2015 John Wiley & Sons, Ltd.
- Published
- 2015
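The variable projection idea above can be shown on the smallest possible separable model: for y ≈ a*exp(-t/tau), the linear amplitude a has a closed form for every trial tau, so the outer optimization runs over tau alone. The Gauss–Newton machinery, regularization, and PSF model of the paper are not reproduced; data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
t = np.linspace(0.0, 3.0, 60)
y = 2.5 * np.exp(-t / 0.9) + 0.01 * rng.standard_normal(t.size)

def projected_cost(tau):
    phi = np.exp(-t / tau)
    a = (phi @ y) / (phi @ phi)        # linear parameter eliminated in closed form
    return np.sum((y - a * phi)**2)

tau_hat = minimize_scalar(projected_cost, bounds=(0.1, 5.0),
                          method="bounded").x
phi = np.exp(-t / tau_hat)
a_hat = (phi @ y) / (phi @ phi)
```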
30. Neither fixed nor random: weighted least squares meta-analysis
- Author
-
Hristos Doucouliagos and T. D. Stanley
- Subjects
Statistics and Probability ,meta-analysis, meta-regression, weighted least squares, fixed effect, random effects ,Epidemiology ,Estimator ,Fixed effects model ,Generalized least squares ,Random effects model ,Markov Chains ,Bias ,Meta-Analysis as Topic ,Non-linear least squares ,Statistics ,Linear regression ,Confidence Intervals ,Humans ,Computer Simulation ,Least-Squares Analysis ,Total least squares ,Publication Bias ,Nonlinear regression ,Mathematics - Abstract
This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects.
- Published
- 2015
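The unrestricted weighted least squares average described above can be sketched on simulated studies. As assumptions for illustration: thirty homogeneous studies with true effect 0.4 and uniformly drawn standard errors; the point estimate coincides with the fixed-effect average, while the standard error is rescaled by the weighted residual dispersion rather than fixed at 1.

```python
import numpy as np

rng = np.random.default_rng(8)
k = 30
se = rng.uniform(0.05, 0.3, k)                  # per-study standard errors
effects = 0.4 + se * rng.standard_normal(k)     # simulated study estimates

w = 1.0 / se**2
est = np.sum(w * effects) / np.sum(w)           # same point estimate as fixed effect
phi = np.sum(w * (effects - est)**2) / (k - 1)  # multiplicative dispersion
se_wls = np.sqrt(phi / np.sum(w))               # unrestricted WLS standard error
```

With heterogeneity, phi exceeds 1 and widens the interval; without it, phi stays near 1 and the result matches fixed-effect meta-analysis, which is the behaviour the abstract claims.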
31. Bayesian analysis of transverse signal decay with application to human brain
- Author
-
Richard G. Spencer, David A. Reiter, and Mustapha Bouhrara
- Subjects
Computer science ,Noise (signal processing) ,Estimation theory ,Non-linear least squares ,Statistics ,Bayesian probability ,Range (statistics) ,Radiology, Nuclear Medicine and imaging ,Relaxation (approximation) ,Algorithm ,Imaging phantom ,Exponential function - Abstract
Purpose Transverse relaxation analysis with several signal models has been used extensively to determine tissue and material properties. However, the derivation of corresponding parameter values is notoriously unreliable. We evaluate improvements in the quality of parameter estimation using Bayesian analysis and incorporating the Rician noise model, as appropriate for magnitude MR images. Theory and Methods Monoexponential, stretched exponential, and biexponential signal models were analyzed using nonlinear least squares (NLLS) and Bayesian approaches. Simulations and phantom and human brain data were analyzed using three different approaches to account for noise. Parameter estimation bias (reflecting accuracy) and dispersion (reflecting precision) were derived for a range of signal-to-noise ratios (SNR) and relaxation parameters. Results All methods performed well at high SNR. At lower SNR, the Bayesian approach yielded parameter estimates of considerably greater precision, as well as greater accuracy, than did NLLS. Incorporation of the Rician noise model greatly improved accuracy and, to a somewhat lesser extent, precision, in derived transverse relaxation parameters. Analyses of data obtained from solution phantoms and from brain were consistent with simulations. Conclusion Overall, estimation of parameters characterizing several different transverse relaxation models was markedly improved through use of Bayesian analysis and through incorporation of the Rician noise model. Magn Reson Med 74:785–802, 2015. © 2014 Wiley Periodicals, Inc.
- Published
- 2014
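One ingredient of the abstract, accounting for the noise floor in magnitude data, can be sketched with a crude noise-floor signal model fitted by NLLS. This is a stated simplification: the paper's full Bayesian machinery and exact Rician likelihood are not reproduced, and all values are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

# Magnitude of a decaying signal plus complex Gaussian noise shows a
# noise floor at long echo times; ignoring it biases T2 estimates.
rng = np.random.default_rng(9)
t = np.linspace(0.0, 200.0, 40)
A_true, T2_true, sigma = 100.0, 60.0, 5.0
signal = A_true * np.exp(-t / T2_true)
mag = np.hypot(signal + sigma * rng.standard_normal(t.size),
               sigma * rng.standard_normal(t.size))

# Approximate magnitude model with an explicit noise-floor parameter s
model = lambda t, A, T2, s: np.sqrt((A * np.exp(-t / T2))**2 + 2 * s**2)
(A_hat, T2_hat, s_hat), _ = curve_fit(model, t, mag, p0=[80.0, 40.0, 1.0])
```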
32. A self-guided search for good local minima of the sum-of-squared-error in nonlinear least squares regression
- Author
-
Frank Vogt
- Subjects
Polynomial regression ,Maxima and minima ,Nonlinear system ,Mathematical optimization ,Mean squared error ,Linearization ,Applied Mathematics ,Non-linear least squares ,Applied mathematics ,Probability density function ,Nonlinear regression ,Analytical Chemistry ,Mathematics - Abstract
Hard modeling of nonlinear chemical or biological systems is highly relevant as a model function together with values for model parameters provides insights in the systems' functionalities. Deriving values for said model parameters via nonlinear regression, however, is challenging as usually one of the numerous local minima of the sum-of-squared errors (SSEs) is determined; furthermore, for different starting points, different minima may be found. Thus, nonlinear regression is prone to low accuracy and low reproducibility. Therefore, there is a need for a generally applicable, automated initialization of nonlinear least squares algorithms, which reaches a good, reproducible solution after spending a reasonable computation time probing the SSE-hypersurface. For this purpose, a three-step methodology is presented in this study. First, the SSE-hypersurface is randomly probed in order to estimate probability density functions of initial model parameters that generally lead to an accurate fit solution. Second, these probability density functions then guide a high-resolution sampling of the SSE-hypersurface. This second probing focuses on those model parameter ranges that are likely to produce a low SSE. As the probing continues, the most appropriate initial guess is retained and eventually utilized in a subsequent nonlinear regression. It is shown that this “guided random search” derives considerably better regression solutions than linearization of model functions, which has so far been considered the best-case scenario. Examples from infrared spectroscopy, cell culture monitoring, reaction kinetics, and image analyses demonstrate the broad and successful applicability of this novel method. Copyright © 2014 John Wiley & Sons, Ltd.
- Published
- 2014
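The guided-initialization idea can be sketched in its simplest (unguided, single-pass) form: probe the SSE surface at many random parameter draws and keep the best probe as the starting point for one NLLS refinement. The sinusoid model is chosen because its SSE has many local minima in the frequency parameter; the density-estimation stage of the paper's three-step method is omitted.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(10)
t = np.linspace(0.0, 10.0, 50)
y = 1.2 * np.sin(2.1 * t) + 0.05 * rng.standard_normal(t.size)

sse = lambda p: np.sum((p[0] * np.sin(p[1] * t) - y)**2)
probes = rng.uniform([0.1, 0.5], [3.0, 4.0], size=(500, 2))  # random probing
x0 = min(probes, key=sse)                                    # best initial guess
fit = least_squares(lambda p: p[0] * np.sin(p[1] * t) - y, x0)
```

A naive start such as [1, 1] typically lands in the wrong frequency basin here; the probing step makes the final NLLS reproducible.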
33. Kinetic analysis of hyperpolarized data with minimum a priori knowledge: Hybrid maximum entropy and nonlinear least squares method (MEM/NLS)
- Author
-
Erika Mariotti, Richard Southworth, Joel Dunn, Thomas R. Eykyn, and Mattia Veronese
- Subjects
Nuclear magnetic resonance ,Laplace inversion ,Chemistry ,Non-linear least squares ,Principle of maximum entropy ,Monte Carlo method ,Kinetic analysis ,A priori and a posteriori ,NLS ,Applied mathematics ,Radiology, Nuclear Medicine and imaging ,Hyperpolarization (physics) - Abstract
Purpose To assess the feasibility of using a hybrid Maximum-Entropy/Nonlinear Least Squares (MEM/NLS) method for analyzing the kinetics of hyperpolarized dynamic data with minimum a priori knowledge. Theory and Methods A continuous distribution of rates obtained through the Laplace inversion of the data is used as a constraint on the NLS fitting to derive a discrete spectrum of rates. Performance of the MEM/NLS algorithm was assessed through Monte Carlo simulations and validated by fitting the longitudinal relaxation time curves of hyperpolarized [1-13C] pyruvate acquired at 9.4 Tesla and at three different flip angles. The method was further used to assess the kinetics of hyperpolarized pyruvate-lactate exchange acquired in vitro in whole blood and to re-analyze the previously published in vitro reaction of hyperpolarized 15N choline with choline kinase. Results The MEM/NLS method was found to be adequate for the kinetic characterization of hyperpolarized in vitro time-series. Additional insights were obtained from experimental data in blood as well as from previously published 15N choline experimental data. Conclusion The proposed method informs on the compartmental model that best approximates the biological system observed using hyperpolarized 13C MR, especially when the metabolic pathway assessed is complex or a new hyperpolarized probe is used. Magn Reson Med 73:2332–2342, 2015. © 2014 The authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine.
- Published
- 2014
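The two-stage structure of the abstract, a continuous rate inversion constraining a discrete NLLS fit, can be sketched as follows. As loud assumptions: NNLS stands in for the maximum-entropy inversion, the decay is mono-exponential, and the rate grid and data are invented.

```python
import numpy as np
from scipy.optimize import nnls, curve_fit

# Stage 1: invert the decay onto a grid of rates (crude Laplace inversion).
rng = np.random.default_rng(14)
t = np.linspace(0.0, 5.0, 80)
y = np.exp(-0.8 * t) + 0.005 * rng.standard_normal(t.size)

rates = np.logspace(-1, 1, 40)                 # candidate rate grid
K = np.exp(-np.outer(t, rates))                # Laplace kernel
weights, _ = nnls(K, y)                        # nonnegative rate spectrum
k0 = rates[np.argmax(weights)]                 # dominant rate

# Stage 2: use the dominant rate to seed a discrete NLLS fit.
(a_hat, k_hat), _ = curve_fit(lambda t, a, k: a * np.exp(-k * t), t, y,
                              p0=[1.0, k0])
```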
34. A supervised fitting approach to force field parametrization with application to the SIBFA polarizable force field
- Author
-
Markus Meuwly, Michael Devereux, Jean-Philip Piquemal, Nohad Gresh, Department of Chemistry, University of Basel (Unibas), Pharmacochimie Moléculaire et Cellulaire (PMC - UMR_S 648), Centre National de la Recherche Scientifique (CNRS)-Institut National de la Santé et de la Recherche Médicale (INSERM)-Université Paris Descartes - Paris 5 (UPD5), and Laboratoire de chimie théorique (LCT), Centre National de la Recherche Scientifique (CNRS)-Université Pierre et Marie Curie - Paris 6 (UPMC)
- Subjects
Formamides ,Chemistry ,Imidazoles ,Water ,General Chemistry ,Force field (chemistry) ,Fitting Problems ,[CHIM.THEO]Chemical Sciences/Theoretical and/or physical chemistry ,Computational Mathematics ,Nonlinear system ,Classical mechanics ,Models, Chemical ,Polarizability ,Ab initio quantum chemistry methods ,Non-linear least squares ,Computer Simulation ,Magnesium ,Statistical physics ,ComputingMilieux_MISCELLANEOUS ,Basis set - Abstract
A supervised, semiautomated approach to force field parameter fitting is described and applied to the SIBFA polarizable force field. The I-NoLLS interactive, nonlinear least squares fitting program is used as an engine for parameter refinement while keeping parameter values within a physical range. Interactive fitting is shown to avoid many of the stability problems that frequently afflict highly correlated, nonlinear fitting problems occurring in force field parametrizations. The method is used to obtain parameters for the H2O, formamide, and imidazole molecular fragments and their complexes with the Mg2+ cation. Reference data obtained from ab initio calculations using an aug-cc-pVTZ basis set exploit advances in modern computer hardware to provide a more accurate parametrization of SIBFA than has previously been available. © 2014 Wiley Periodicals, Inc.
- Published
- 2014
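One ingredient of supervised fitting, keeping parameters inside a physical range during NLLS refinement, can be sketched with simple box bounds. This is not I-NoLLS or SIBFA: the Buckingham-like pair energy, its parameters, and the bounds are all invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Fit a hypothetical pair energy A*exp(-b*r) - c/r**6 with all three
# parameters constrained to nonnegative, physically sensible ranges.
rng = np.random.default_rng(13)
r = np.linspace(0.9, 2.0, 40)
A_t, b_t, c_t = 500.0, 4.0, 6.0                 # assumed "true" parameters
E = A_t * np.exp(-b_t * r) - c_t / r**6 + 0.02 * rng.standard_normal(r.size)

resid = lambda p: p[0] * np.exp(-p[1] * r) - p[2] / r**6 - E
fit = least_squares(resid, x0=[100.0, 2.0, 1.0],
                    bounds=([0.0, 0.0, 0.0], [1e4, 10.0, 100.0]))
```

Bounds prevent the highly correlated exponential and dispersion terms from wandering into unphysical territory, the stability problem the abstract describes; interactive refinement in I-NoLLS goes further by letting the user steer the fit.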
35. A note on implementation of decaying product correlation structures for quasi-least squares
- Author
-
Justine Shults and Matthew Guerra
- Subjects
Statistics and Probability ,Matrix (mathematics) ,Mathematical optimization ,Bernoulli's principle ,Epidemiology ,Non-linear least squares ,Product (mathematics) ,Estimator ,Applied mathematics ,Generalized least squares ,Least squares ,Variable (mathematics) ,Mathematics - Abstract
This note implements an unstructured decaying product matrix via the quasi-least squares approach for estimation of the correlation parameters in the framework of generalized estimating equations. The structure we consider is fairly general without requiring the large number of parameters that are involved in a fully unstructured matrix. It is straightforward to show that the quasi-least squares estimators of the correlation parameters yield feasible values for the unstructured decaying product structure. Furthermore, subject to conditions that are easily checked, the quasi-least squares estimators are valid for longitudinal Bernoulli data. We demonstrate implementation of the structure in a longitudinal clinical trial with both a continuous and binary outcome variable.
- Published
- 2014
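The correlation structure itself can be sketched directly: for adjacent-in-time correlations alpha_t, the unstructured decaying product structure sets corr(y_j, y_k) to the product of the alphas between j and k. The alpha values below are invented; the quasi-least squares estimation step is not reproduced.

```python
import numpy as np

# Build the unstructured decaying product correlation matrix for n = 4 times.
alpha = np.array([0.8, 0.6, 0.9])     # adjacent-lag correlations (assumed)
n = alpha.size + 1
R = np.eye(n)
for j in range(n):
    for k in range(j + 1, n):
        R[j, k] = R[k, j] = np.prod(alpha[j:k])   # product of in-between alphas
```

The structure needs only n-1 parameters instead of the n(n-1)/2 of a fully unstructured matrix, which is the economy the abstract points to.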
36. Miscellaneous nonlinear estimation tools for R
- Author
-
John C. Nash
- Subjects
Estimation ,Nonlinear system ,Computer science ,Non-linear least squares ,Maximum likelihood ,Statistics ,Applied mathematics ,Generalized least squares ,Least squares - Published
- 2014
37. Nonlinear least squares
- Author
-
John C. Nash
- Subjects
Mathematical optimization ,Residual sum of squares ,Non-linear least squares ,Applied mathematics ,Least trimmed squares ,Generalized least squares ,Total least squares ,Non-linear iterative partial least squares ,Least squares ,Mathematics - Published
- 2014
38. Partial least squares discriminant analysis: taking the magic away
- Author
-
Richard G. Brereton and Gavin R. Lloyd
- Subjects
business.industry ,Applied Mathematics ,Magic (programming) ,Centroid ,Pattern recognition ,Overfitting ,Linear discriminant analysis ,Analytical Chemistry ,Euclidean distance ,Exploratory data analysis ,Non-linear least squares ,Partial least squares regression ,Statistics ,Artificial intelligence ,business ,Mathematics - Abstract
Partial least squares discriminant analysis (PLS-DA) has been available for nearly 20 years yet is poorly understood by most users. By simple examples, it is shown graphically and algebraically that for two equal class sizes, PLS-DA using one partial least squares (PLS) component provides equivalent classification results to Euclidean distance to centroids, and by using all nonzero components to linear discriminant analysis. Extensions where there are unequal class sizes and more than two classes are discussed including common pitfalls and dilemmas. Finally, the problems of overfitting and PLS scores plots are discussed. It is concluded that for classification purposes, PLS-DA has no significant advantages over traditional procedures and is an algorithm full of dangers. It should not be viewed as a single integrated method but as step in a full classification procedure. However, despite these limitations, PLS-DA can provide good insight into the causes of discrimination via weights and loadings, which gives it a unique role in exploratory data analysis, for example in metabolomics via visualisation of significant variables such as metabolites or spectroscopic peaks. Copyright © 2014 John Wiley & Sons, Ltd.
- Published
- 2014
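The one-component equivalence claimed in the abstract above can be checked numerically. A minimal numpy sketch on synthetic, equal-sized classes (the data, the ±1 class coding, and the one-component NIPALS-style PLS are illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two equal-sized classes in 5 dimensions (synthetic, for illustration).
n = 40
X1 = rng.normal(loc=1.0, size=(n, 5))
X2 = rng.normal(loc=-1.0, size=(n, 5))
X = np.vstack([X1, X2])
y = np.array([1.0] * n + [-1.0] * n)   # +/-1 class coding

# One-component PLS: weight vector from centred data (NIPALS first step).
Xc = X - X.mean(axis=0)
yc = y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w                      # scores
q = (t @ yc) / (t @ t)          # y-loading
plsda_labels = np.where(t * q > 0, 1, -1)

# Euclidean distance to class centroids.
c1, c2 = X1.mean(axis=0), X2.mean(axis=0)
d1 = np.linalg.norm(X - c1, axis=1)
d2 = np.linalg.norm(X - c2, axis=1)
centroid_labels = np.where(d1 < d2, 1, -1)

print(np.array_equal(plsda_labels, centroid_labels))  # identical assignments
```

With equal class sizes the first PLS weight is proportional to the difference of class centroids and the grand mean sits halfway between them, so the two decision rules share the same hyperplane.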
39. Thermodynamic Parameters of Bonds in Glassy Materials from Shear Viscosity Coefficient Data
- Author
-
Michael I. Ojovan
- Subjects
Nonlinear system ,Materials science ,Non-linear least squares ,Shear viscosity ,Enthalpy ,Thermodynamics ,Experimental data ,General Materials Science - Abstract
A simple analytical approach is proposed that provides numerical values of the thermodynamic parameters of bonds (enthalpy and entropy of formation) in glassy materials from four shear viscosity coefficient data points at four temperatures, two of which should be below Tg and the other two above Tg. This method replaces complex fitting procedures for the nonlinear viscosity equations to experimental data and provides results close to those obtained from such fitting procedures. It can also be used to obtain initial estimates of unknown parameters for nonlinear least squares regression procedures.
- Published
- 2014
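The idea of solving a small exactly determined system to seed a nonlinear least squares refinement can be illustrated with a deliberately simplified Arrhenius-type viscosity law (the paper's bond model is more elaborate; the law, data, and parameter values here are assumptions for the sketch):

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # J/(mol K)

# Hypothetical Arrhenius-type law (a simplification, not the paper's model):
# ln(eta) = ln(A) + Ea / (R T)
def log_eta(T, lnA, Ea):
    return lnA + Ea / (R * T)

# Synthetic "measured" data with small deterministic perturbations.
true_lnA, true_Ea = -10.0, 120e3
T = np.array([700.0, 750.0, 800.0, 850.0])
ln_eta = log_eta(T, true_lnA, true_Ea) + np.array([0.02, -0.01, 0.01, -0.02])

# Exact two-point solve gives zeroth-order estimates ...
Ea0 = R * (ln_eta[0] - ln_eta[3]) / (1.0 / T[0] - 1.0 / T[3])
lnA0 = ln_eta[0] - Ea0 / (R * T[0])

# ... which then seed the nonlinear least squares refinement on all points.
popt, _ = curve_fit(log_eta, T, ln_eta, p0=[lnA0, Ea0])
print(popt)
```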
40. Developing the CGLS algorithm for the least squares solutions of the general coupled matrix equations
- Author
-
Masoud Hajarian
- Subjects
Matrix (mathematics) ,Iterative method ,Group (mathematics) ,General Mathematics ,Non-linear least squares ,Conjugate gradient method ,Linear system ,General Engineering ,Algorithm ,Finite set ,Least squares ,Mathematics - Abstract
In the present paper, we consider the minimum norm least squares solutions of the general coupled matrix equations. By developing the conjugate gradient least squares (CGLS) method, we construct an efficient iterative method to solve this problem. The constructed iterative method can compute the solution group of the problem within a finite number of iterations in the absence of roundoff errors. It is also shown that the method is stable and robust. Finally, numerical experiments demonstrate that the iterative method is effective and efficient. Copyright © 2013 John Wiley & Sons, Ltd.
- Published
- 2013
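The classical CGLS recurrences that the paper generalises can be sketched for the ordinary vector case min ||Ax - b||; the paper's contribution is extending the same recurrences to solution groups of coupled matrix equations, which this sketch does not attempt:

```python
import numpy as np

def cgls(A, b, iters=None, tol=1e-12):
    """CGLS: minimise ||A x - b||_2 without forming A^T A explicitly."""
    m, n = A.shape
    if iters is None:
        iters = 2 * n
    x = np.zeros(n)
    r = b.copy()             # data-space residual b - A x
    s = A.T @ r              # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break            # finite termination: <= n steps in exact arithmetic
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 6))
b = rng.normal(size=20)
x = cgls(A, b)
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_ref))
```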
41. Experimental and Modeling Study of Melt Polycondensation Process of PA-MXD6
- Author
-
Like Chen, Yong Zhao, Zhenhao Xi, and Ling Zhao
- Subjects
Mass transfer coefficient ,Work (thermodynamics) ,Materials science ,Polymers and Plastics ,Organic Chemistry ,Kinetics ,Thermodynamics ,Adipamide ,Condensed Matter Physics ,Reversible reaction ,chemistry.chemical_compound ,chemistry ,Diffusion process ,Non-linear least squares ,Mass transfer ,Polymer chemistry ,Materials Chemistry - Abstract
The main polycondensation reaction of poly(m-xylene adipamide) (PA-MXD6) is a reversible reaction strongly coupled with mass transfer in the melt polycondensation process. In this work, a realistic model for melt polycondensation of PA-MXD6 has been proposed, taking into account the kinetics data, equilibrium data and the diffusion process of the major by-product, water, in the melt. The characteristics of the reaction and the effects of mass transfer on the polycondensation process have been studied using stagnant film experiments. It is observed that the apparent rate of the polycondensation process increases with higher temperature, lower degree of vacuum and thinner film thickness with reduced specific interfacial area. Based on the experimental data, the model parameters, including the kinetics data, equilibrium data and the mass transfer coefficient of the volatile, have been estimated by the nonlinear least squares method. The model predictions agree quite satisfactorily with the experimental data, with nearly all relative deviations below 2%.
- Published
- 2013
42. A multidimensional approach to the analysis of chemical shift titration experiments in the frame of a multiple reaction scheme
- Author
-
Nicolas Giraud, Olivier Maury, Elise Dumont, and Anthony D'Aléo
- Subjects
Lanthanide ,010405 organic chemistry ,Chemistry ,Frame (networking) ,Analytical chemistry ,General Chemistry ,Derivative ,010402 general chemistry ,01 natural sciences ,0104 chemical sciences ,Data set ,Non-linear least squares ,Hyperparameter optimization ,Molecule ,General Materials Science ,Titration ,Biological system - Abstract
We present a method for fitting curves acquired by chemical shift titration experiments, in the frame of a three-step complexation mechanism. To that end, we have implemented a fitting procedure, based on a nonlinear least squares fitting method, that determines the best fitting curve using a “coarse grid search” approach and provides distributions for the different parameters of the complexation model that are compatible with the experimental precision. The resulting analysis protocol is first described and validated on a theoretical data set. We show its ability to converge to the true parameter values of the simulated reaction scheme and to evaluate complexation constants together with multidimensional uncertainties. Then, we apply this protocol to the study of the supramolecular interactions, in aqueous solution, between a lanthanide complex and three different model molecules, using NMR titration experiments. We show that within the uncertainty that can be evaluated from the parameter distributions generated during our analysis, the affinities between the lanthanide derivative and each model molecule can be discriminated, and we propose values for the corresponding thermodynamic constants. Copyright © 2013 John Wiley & Sons, Ltd.
- Published
- 2013
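The "coarse grid search followed by nonlinear least squares" strategy described above can be sketched on a deliberately simplified 1:1 binding isotherm (the paper treats a three-step complexation scheme; the model, data, and grid here are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

# Simplified 1:1 binding isotherm with one association constant K.
def shift(params, c):
    d0, dmax, K = params
    return d0 + dmax * K * c / (1.0 + K * c)

c = np.linspace(0.0, 5.0, 12)                  # titrant concentration (arb.)
true = np.array([1.0, 0.8, 2.5])
obs = shift(true, c) + 0.002 * np.sin(7 * c)   # small deterministic "noise"

def residuals(params):
    return shift(params, c) - obs

# Coarse grid search over K picks the most promising starting point ...
best = None
for K0 in np.logspace(-2, 2, 25):
    p0 = np.array([obs[0], obs[-1] - obs[0], K0])
    cost = 0.5 * np.sum(residuals(p0) ** 2)
    if best is None or cost < best[0]:
        best = (cost, p0)

# ... which the nonlinear least squares solver then refines.
fit = least_squares(residuals, best[1])
print(fit.x)
```

Repeating the refinement from every surviving grid point (rather than only the best one) is one way to build the parameter distributions the authors use for uncertainty evaluation.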
43. Data-based output feedback control using least squares estimation method for a class of nonlinear systems
- Author
-
Derong Liu and Zhuo Wang
- Subjects
Mathematical model ,Computer science ,Mechanical Engineering ,General Chemical Engineering ,MathematicsofComputing_NUMERICALANALYSIS ,Biomedical Engineering ,Aerospace Engineering ,Measure (mathematics) ,Industrial and Manufacturing Engineering ,symbols.namesake ,Nonlinear system ,Matrix (mathematics) ,Control and Systems Engineering ,Control theory ,Non-linear least squares ,Jacobian matrix and determinant ,Convergence (routing) ,symbols ,Electrical and Electronic Engineering ,Constant (mathematics) - Abstract
This paper develops a data-based output feedback control method for a class of nonlinear systems which have unknown mathematical models. The dynamic model of the system is assumed to be smooth, while the corresponding Jacobian matrices are constant in each sampling period. We employ a zero-order hold and a fast sampling technique to sample and measure the output signal. When the measured data contain white noise, we use the least squares method to estimate the corresponding Jacobian matrices. The feedback gain matrix is calculated and adjusted adaptively in real time according to these estimates. Theoretical analysis of the convergence condition is provided, and simulation results demonstrate the feasibility of the method. Copyright © 2013 John Wiley & Sons, Ltd.
- Published
- 2013
44. Investigation of the porous structure of cellulosic substrates through confocal laser scanning microscopy
- Author
-
Larry P. Walker, Jose M. Moran-Mirabal, Jean-Yves Parlange, and Dong Yang
- Subjects
Materials science ,Filter paper ,Resolution (electron density) ,Analytical chemistry ,Bioengineering ,Applied Microbiology and Biotechnology ,chemistry.chemical_compound ,chemistry ,Diffusion process ,Non-linear least squares ,Diffusion (business) ,Cellulose ,Porosity ,Porous medium ,Biological system ,Biotechnology - Abstract
At the most fundamental level, saccharification occurs when cell wall degrading enzymes (CWDEs) diffuse, bind to and react on readily accessible cellulose fibrils. Thus, the study of the diffusive behavior of solutes into and out of cellulosic substrates is important for understanding how biomass pore size distribution affects enzyme transport, binding, and catalysis. In this study, fluorescently labeled dextrans with molecular weights of 20, 70, and 150 kDa were used as probes to assess their diffusion into the porous structure of filter paper. Fluorescence microscopy with high numerical aperture objectives was used to generate high temporal and spatial resolution datasets of probe concentration versus time. In addition, two diffusion models were developed: a simple transient diffusion model and a pore-grouping diffusion model. These models and the experimental datasets were used to investigate solute diffusion in macro- and micro-pores. Nonlinear least squares fitting of the datasets to the simple transient model yielded diffusion coefficient estimates that were inadequate for describing both the initial fast diffusion and the later slow diffusion rates observed; on the other hand, nonlinear least squares fitting of the datasets to the pore-grouping diffusion model yielded estimates of the micro-pore diffusion coefficient that described the inherently porous structure of plant-derived cellulose. In addition, modeling results show that on average 75% of the accessible pore volume is available for fast diffusion without any significant pore hindrance. The method developed can be applied to study the porous structure of plant-derived biomass and help assess the diffusion process for enzymes of known sizes. Biotechnol. Bioeng. 2013;110: 2836–2845. © 2013 Wiley Periodicals, Inc.
- Published
- 2013
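Why a single-rate model fails while a two-pool (fast macro-pore / slow micro-pore) model succeeds can be shown with a bi-exponential uptake sketch; the functional forms and rate constants here are assumed stand-ins for the paper's diffusion models, not its actual equations:

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-pool uptake: a fast (macro-pore) and a slow (micro-pore) fraction.
def two_pool(t, f, k_fast, k_slow):
    return f * (1 - np.exp(-k_fast * t)) + (1 - f) * (1 - np.exp(-k_slow * t))

# Single-rate uptake, for comparison.
def one_pool(t, k):
    return 1 - np.exp(-k * t)

t = np.linspace(0, 60, 50)
data = two_pool(t, 0.75, 0.5, 0.02)   # 75% of pore volume fills fast

p2, _ = curve_fit(two_pool, t, data, p0=[0.5, 0.1, 0.01])
p1, _ = curve_fit(one_pool, t, data, p0=[0.1])

sse2 = np.sum((two_pool(t, *p2) - data) ** 2)
sse1 = np.sum((one_pool(t, *p1) - data) ** 2)
print(sse2 < sse1)  # the two-pool model captures both regimes
```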
45. Dynamic Properties of Golden Delicious and Red Delicious Apple under Normal Contact Force Models
- Author
-
Hossein Barikloo and Ebrahim Ahmadi
- Subjects
Normal force ,Materials science ,business.industry ,Pendulum ,Pharmaceutical Science ,Stiffness ,Structural engineering ,Discrete element method ,Viscoelasticity ,Contact force ,Dynamic loading ,Non-linear least squares ,medicine ,medicine.symptom ,business ,Food Science - Abstract
Dynamic forces cause most bruise damage during fruit transport and handling. In order to reduce this damage, it is necessary to model the impact forces. Limited information is available in the literature about the dynamic behavior and dynamic modeling of apples. The normal force-displacement relationship between a viscoelastic sphere (apple fruit) and an impactor was analyzed using the Kuwabara–Kono and Tsuji models as normal contact force models. Fruits were subjected to dynamic loading by means of a pendulum at several impact levels. In order to estimate the parameters (spring stiffness, k, and damping, c) of these models for normal impacts, the nonlinear least squares technique (optimization method) was used. Practical Applications A very promising approach for the simulation of fruit impact damage during transport and handling is the discrete element method (DEM) with contact force models. In order to apply it, models for the forces acting between particles (such as fruits) in contact need to be specified. Forces acting between two particles are decomposed into normal and tangential components. In this paper, the focus is on normal contacts. The presented research determines the parameters of normal contact force models suited for DEM simulation of viscoelastic materials (fruit).
- Published
- 2013
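Estimating the spring and damping parameters of a viscoelastic contact law by least squares can be sketched with the Kuwabara–Kono form F = k d^(3/2) + c d^(1/2) v; the displacement/velocity samples, noise level, and true parameter values below are synthetic assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

# Hertzian spring-dashpot contact law (Kuwabara-Kono form):
# F = k * d**1.5 + c * sqrt(d) * v
def force(params, d, v):
    k, c = params
    return k * d**1.5 + c * np.sqrt(d) * v

rng = np.random.default_rng(3)
d = rng.uniform(1e-4, 2e-3, 100)       # displacement samples (m)
v = rng.uniform(-0.5, 0.5, 100)        # displacement rate (m/s)
true_k, true_c = 2.0e6, 150.0
F = force((true_k, true_c), d, v) * (1 + 0.01 * rng.normal(size=100))

# Least squares estimation of (k, c) from the sampled force data.
fit = least_squares(lambda p: force(p, d, v) - F, x0=[1e6, 50.0], method="lm")
print(fit.x)
```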
46. Fault detection for LPV systems using model parameters that can be estimated via linear least squares
- Author
-
Michel Verhaegen, Balázs Kulcsár, and Jianfei Dong
- Subjects
Observer (quantum physics) ,Mechanical Engineering ,General Chemical Engineering ,Biomedical Engineering ,Aerospace Engineering ,Residual ,Industrial and Manufacturing Engineering ,Fault detection and isolation ,Quadratic equation ,Control and Systems Engineering ,Control theory ,Non-linear least squares ,Affine transformation ,Electrical and Electronic Engineering ,Linear least squares ,Statistical hypothesis testing ,Mathematics - Abstract
This paper presents a fault detection approach for discrete-time affine linear parameter varying (LPV) systems with additive faults. A finite horizon input-output LPV model is used to obtain a regression residual form that is linear in the model parameters. The bias in the residual term vanishes because of the quadratic stability of an underlying observer. The new methodology avoids projecting the residual onto a parity space, which in real time requires at least quadratic computational complexity. When the bias is neglected, fault detection is carried out by a χ2 hypothesis test. Finally, the algorithm uses model parameters that can be identified prior to the online fault detection with linear least squares. A real-time experiment is carried out to demonstrate the viability of the proposed method.
- Published
- 2013
47. Appendix 2: Nonlinear Least‐squares Minimisation
- Author
-
Daniel J. Duffy and Andrea Germani
- Subjects
Non-linear least squares ,Calculus ,Applied mathematics ,Minimisation (clinical trials) ,Mathematics - Published
- 2013
48. Gaussian quadrature 4D-Var
- Author
-
Roel J. J. Stappers and Jan Barkmeijer
- Subjects
Atmospheric Science ,Nonlinear system ,symbols.namesake ,Operator (computer programming) ,Fixed-point iteration ,Linearization ,Non-linear least squares ,Mathematical analysis ,Trajectory ,symbols ,Tangent ,Gaussian quadrature ,Mathematics - Abstract
A new incremental four-dimensional variational (4D-Var) data assimilation algorithm is introduced. The algorithm does not require the computationally expensive integrations with the nonlinear model in the outer loops. Nonlinearity is accounted for by modifying the linearization trajectory of the observation operator based on integrations with the tangent linear (TL) model. This allows us to update the linearization trajectory of the observation operator in the inner loops at negligible computational cost. As a result the distinction between inner and outer loops is no longer necessary. The key idea on which the proposed 4D-Var method is based is that by using Gaussian quadrature it is possible to get an exact correspondence between the nonlinear time evolution of perturbations and the time evolution in the TL model. It is shown that J-point Gaussian quadrature can be used to derive the exact adjoint-based observation impact equations and furthermore that it is straightforward to account for the effect of multiple outer loops in these equations if the proposed 4D-Var method is used. The method is illustrated using a three-level quasi-geostrophic model and the Lorenz (1996) model.
- Published
- 2012
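The quadrature property the abstract leans on, that J-point Gaussian quadrature is exact for polynomials of degree up to 2J − 1, is easy to verify numerically (a generic check, unrelated to any particular 4D-Var implementation):

```python
import numpy as np

# 3-point Gauss-Legendre rule: exact for polynomial degree <= 2*3 - 1 = 5.
J = 3
nodes, weights = np.polynomial.legendre.leggauss(J)

# Integral of t^4 over [-1, 1] is 2/5; degree 4 is within the exact range.
approx = np.sum(weights * nodes**4)
print(approx)   # 0.4 up to rounding
```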
49. On Sensitivity of Inverse Response Plot Estimation and the Benefits of a Robust Estimation Approach
- Author
-
Simon J. Sheather and Luke A. Prendergast
- Subjects
Statistics and Probability ,Mathematical optimization ,Transformation (function) ,Simple (abstract algebra) ,Linearization ,Non-linear least squares ,Ordinary least squares ,Applied mathematics ,Sensitivity (control systems) ,Statistics, Probability and Uncertainty ,Plot (graphics) ,Regression ,Mathematics - Abstract
Inverse response plots are a useful tool in determining a response transformation function for response linearization in regression. Under some mild conditions it is possible to seek such transformations by plotting ordinary least squares fits versus the responses. A common approach is then to use nonlinear least squares to estimate a transformation by modelling the fits on the transformed response where the transformation function depends on an unknown parameter to be estimated. We provide insight into this approach by considering sensitivity of the estimation via the influence function. For example, estimation is insensitive to the method chosen to estimate the fits in the initial step. Additionally, the inverse response plot does not provide direct information on how well the transformation parameter is being estimated and poor inverse response plots may still result in good estimates. We also introduce a simple robustified process that can vastly improve estimation.
- Published
- 2012
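The inverse-response-plot workflow, regress the untransformed response by ordinary least squares, then model the fits on a transformed response, can be sketched with a power transformation; the data-generating model and the coarse scan over the transformation parameter (in place of a full nonlinear least squares fit) are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# True transformation: sqrt(y) is linear in x (lambda = 0.5).
x = rng.uniform(-1.5, 1.5, 200)
y = (2.0 + x) ** 2

# Step 1: OLS fits of the *untransformed* response on x.
b1, b0 = np.polyfit(x, y, 1)
fits = b0 + b1 * x

# Step 2: model the fits on the transformed response, scanning the
# transformation parameter (a coarse stand-in for NLS estimation).
def sse(lam):
    t = y ** lam
    c1, c0 = np.polyfit(t, fits, 1)
    return np.sum((fits - (c0 + c1 * t)) ** 2)

lams = np.linspace(0.1, 2.0, 39)
lam_hat = lams[np.argmin([sse(l) for l in lams])]
print(lam_hat)   # ~0.5
```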
50. Estimating T1 from multichannel variable flip angle SPGR sequences
- Author
-
Stephen J. Riederer, Armando Manduca, Petrice M. Mostardi, and Joshua D. Trzasko
- Subjects
Variable (computer science) ,Sequence ,Flip angle ,Fitting methods ,Computer science ,Non-linear least squares ,Echo (computing) ,Statistics ,Radiology, Nuclear Medicine and imaging ,Image enhancement ,Algorithm ,Exponential function - Abstract
Quantitative estimation of T1 is a challenging but important task inherent to many clinical applications. The most commonly used paradigm for estimating T1 in vivo involves performing a sequence of spoiled gradient-recalled echo acquisitions at different flip angles, followed by fitting of an exponential model to the data. Although there has been substantial work comparing different fitting methods, there has been little discussion of how these methods should be applied to data acquired using multichannel receivers. In this note, we demonstrate that the manner in which multichannel data are handled can have a substantial impact on T1 estimation performance and should be considered just as important as the choice of flip angles or fitting strategy.
- Published
- 2012
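The underlying variable-flip-angle fit, before any multichannel combination (which is the note's actual subject), can be sketched with the standard DESPOT1 linearization of the SPGR signal equation; TR, T1, and the flip angles below are illustrative values:

```python
import numpy as np

# SPGR steady-state signal, single channel, noiseless.
TR, T1, M0 = 5.0, 800.0, 1.0          # ms, ms, arbitrary units
E1 = np.exp(-TR / T1)
alpha = np.deg2rad([2.0, 5.0, 10.0, 15.0, 20.0])
S = M0 * np.sin(alpha) * (1 - E1) / (1 - E1 * np.cos(alpha))

# DESPOT1 linearization: S/sin(a) = E1 * (S/tan(a)) + M0*(1 - E1),
# so a straight-line fit recovers E1 from the slope.
x = S / np.tan(alpha)
y = S / np.sin(alpha)
slope, intercept = np.polyfit(x, y, 1)
T1_est = -TR / np.log(slope)
print(T1_est)   # ~800 ms for noiseless data
```

How the per-channel signals are combined before forming x and y is exactly where the multichannel handling discussed in the note enters.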