90 results for "Local convergence"
Search Results
2. Proximal Point Method for Quasiconvex Functions in Riemannian Manifolds.
- Author
- Quiroz, Erik Alex Papa
- Subjects
- RIEMANNIAN manifolds, EIGENFUNCTIONS, CURVATURE, ALGORITHMS
- Abstract
This paper studies the convergence of the proximal point method for quasiconvex functions in finite dimensional complete Riemannian manifolds. We prove initially that, in the general case, when the objective function is proper and lower semicontinuous, each accumulation point of the sequence generated by the method, if it exists, is a limiting critical point of the function. Then, under the assumptions that the sectional curvature of the manifold is bounded above by some nonnegative constant and that the objective function is quasiconvex, we analyze two cases. When the constant is zero, the global convergence of the algorithm to a limiting critical point is assured; if it is positive, we prove local convergence for a class of quasiconvex functions that includes Lipschitz functions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
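The abstract above can be illustrated with a minimal Euclidean sketch of the proximal point iteration (a hypothetical toy, not the paper's Riemannian algorithm): each step minimizes the objective plus a quadratic penalty on the distance to the current iterate. For f(y) = |y|, the proximal map has the closed form of soft-thresholding.

```python
def soft_threshold(x, t):
    """Proximal map of t*|.| evaluated at x (closed form for f = |.|)."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def proximal_point(x0, lam=1.0, iters=50):
    """Euclidean proximal point method for f(y) = |y| (minimizer at 0):
    x_{k+1} = argmin_y |y| + (lam/2)(y - x_k)^2 = soft_threshold(x_k, 1/lam)."""
    x = x0
    for _ in range(iters):
        x = soft_threshold(x, 1.0 / lam)
    return x

x_star = proximal_point(5.0)  # reaches the minimizer 0.0 after finitely many steps
```

On a Riemannian manifold the quadratic penalty would use the geodesic distance instead of (y - x)^2; this flat-space sketch corresponds to the zero-curvature case for which the abstract asserts global convergence.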
3. Numerical study of the coefficient identification algorithm based on ensembles of adjoint problem solutions for a production-destruction model.
- Author
- Penenko, Alexey V., Mukatova, Zhadyra S., and Salimova, Akzhan B.
- Subjects
- ALGORITHMS, INVERSE problems, ADJOINT differential equations, PROBLEM solving
- Abstract
A numerical algorithm for the solution of an inverse coefficient problem for a nonstationary, nonlinear production-destruction-type model is proposed and tested on the example of the Lorenz'63 system. With an ensemble of adjoint problem solutions, the inverse problem is transformed into a quasi-linear matrix problem and solved with a Newton-type algorithm. Two different ways of constructing the adjoint ensemble are compared. In the first, a trigonometric basis is used. In the second, in situ measurements are taken into account. Local convergence properties of the algorithm are studied numerically to find out when the use of more data can lead to the degradation of the reconstruction results. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
4. CONVERGENCE ANALYSIS OF GRADIENT ALGORITHMS ON RIEMANNIAN MANIFOLDS WITHOUT CURVATURE CONSTRAINTS AND APPLICATION TO RIEMANNIAN MASS.
- Author
- JINHUA WANG, XIANGMEI WANG, CHONG LI, and JEN-CHIH YAO
- Subjects
- RIEMANNIAN manifolds, CURVATURE, ALGORITHMS, CENTER of mass, MAXIMA & minima
- Abstract
We study the convergence issue for the gradient algorithm (employing general step sizes) for optimization problems on general Riemannian manifolds (without curvature constraints). Under the assumption of the local convexity/quasi-convexity (resp., weak sharp minima), local/global convergence (resp., linear convergence) results are established. As an application, the linear convergence properties of the gradient algorithm employing the constant step sizes and the Armijo step sizes for finding the Riemannian Lp (p ∈ [1, + ∞)) centers of mass are explored, respectively, which in particular extend and/or improve the corresponding results in [B. Afsari, R. Tron, and R. Vidal, SIAM J. Control Optim., 51 (2013), pp. 2230-2260; G. C. Bento et al., J. Optim. Theory Appl., 183 (2019), pp. 977-992]. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
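The gradient scheme with Armijo step sizes mentioned in the abstract can be sketched in one Euclidean variable, where the manifold retraction reduces to ordinary subtraction (a simplified sketch under that assumption, not the Riemannian algorithm itself):

```python
def armijo_gradient_descent(f, grad, x0, beta=0.5, sigma=1e-4, iters=100):
    """Gradient method with Armijo backtracking line search.
    beta: backtracking factor; sigma: sufficient-decrease constant."""
    x = x0
    for _ in range(iters):
        g = grad(x)
        t = 1.0
        # Armijo condition: f(x - t*g) <= f(x) - sigma * t * g^2
        while f(x - t * g) > f(x) - sigma * t * g * g:
            t *= beta
        x = x - t * g
    return x

# minimize f(x) = (x - 2)^2, a smooth convex toy problem
xmin = armijo_gradient_descent(lambda x: (x - 2.0) ** 2,
                               lambda x: 2.0 * (x - 2.0), x0=10.0)
```

The Riemannian L_p center-of-mass application in the abstract runs the same loop, with the gradient and the step taken in the tangent space and mapped back via the exponential map.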
5. APPROXIMATE MATRIX AND TENSOR DIAGONALIZATION BY UNITARY TRANSFORMATIONS: CONVERGENCE OF JACOBI-TYPE ALGORITHMS.
- Author
- USEVICH, KONSTANTIN, JIANZE LI, and COMON, PIERRE
- Subjects
- UNITARY transformations, COMPLEX matrices, ALGORITHMS, MATRICES (Mathematics)
- Abstract
We propose a gradient-based Jacobi algorithm for a class of maximization problems on the unitary group, with a focus on approximate diagonalization of complex matrices and tensors by unitary transformations. We provide weak convergence results, and prove local linear convergence of this algorithm. The convergence results also apply to the case of real-valued tensors. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
6. Local convergence analysis of inverse iteration algorithm for computing the H-spectral radius of a nonnegative weakly irreducible tensor.
- Author
- Sheng, Zhou, Ni, Qin, and Yuan, Gonglin
- Subjects
- NEWTON-Raphson method, RADIUS (Geometry), ALGORITHMS, NONLINEAR equations, INVERSE functions
- Abstract
In this paper, we present an inverse iteration algorithm to find the H-spectral radius and the associated positive eigenvector of a nonnegative weakly irreducible tensor, which always preserves the positivity of approximate eigenvectors. The local quadratic convergence of the proposed algorithm is established based on a basic result of Newton's method for solving nonlinear equations. Some numerical examples illustrate the efficiency of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
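For the order-2 (matrix) special case, the positivity-preserving flavour of such spectral-radius iterations can be sketched with plain power iteration for the Perron root of a nonnegative irreducible matrix (an illustrative analogue only; the paper's method is an inverse iteration in the tensor setting):

```python
def power_iteration(A, x0, iters=100):
    """Power iteration for the spectral radius and Perron vector of a
    nonnegative irreducible matrix. Starting from a positive vector, every
    iterate stays positive, mirroring the positivity preservation in the
    abstract."""
    n = len(x0)
    x = x0[:]
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(y)            # infinity norm keeps entries positive
        x = [v / norm for v in y]
    y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return max(y), x             # spectral-radius estimate, eigenvector

A = [[2.0, 1.0], [1.0, 2.0]]     # spectral radius 3, Perron vector (1, 1)
radius, vec = power_iteration(A, [1.0, 0.5])
```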
7. Union Averaged Operators with Applications to Proximal Algorithms for Min-Convex Functions.
- Author
- Dao, Minh N. and Tam, Matthew K.
- Subjects
- NONEXPANSIVE mappings, ALGORITHMS
- Abstract
In this paper, we introduce and study a class of structured set-valued operators, which we call union averaged nonexpansive. At each point in their domain, the value of such an operator can be expressed as a finite union of single-valued averaged nonexpansive operators. We investigate various structural properties of the class and show, in particular, that it is closed under taking unions, convex combinations, and compositions, and that the fixed point iterations of these operators are locally convergent around strong fixed points. We then systematically apply our results to analyze proximal algorithms in situations where union averaged nonexpansive operators naturally arise. In particular, we consider the problem of minimizing the sum of two functions, where the first is convex and the second can be expressed as the minimum of finitely many convex functions. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
8. A central path interior point method for nonlinear programming and its local convergence.
- Author
- Qiu, Songqiang and Chen, Zhongwen
- Subjects
- NONLINEAR programming, STOCHASTIC convergence, ALGORITHMS, NUMERICAL analysis, MATHEMATICAL models
- Abstract
In this paper, we present an interior point method for nonlinear programming that avoids the use of a penalty function or a filter. We use an adaptively perturbed primal-dual interior point framework to compute trial steps, and a central path technique is used to keep the iterates bounded away from 0 and not deviating too much from the central path. A trust-funnel-like strategy is adopted to drive convergence, and second-order correction (SOC) steps are used to achieve fast local convergence by avoiding the Maratos effect. Furthermore, the presented algorithm avoids the blocking of productive steps from which other trust-funnel-like algorithms may suffer. We show that, under second-order sufficient conditions and strict complementarity, the full Newton step (combined with an SOC step) will be accepted by the algorithm near the solution, and hence the algorithm is locally superlinearly convergent. Encouraging numerical results are reported. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
9. Psoriasis Lesion Detection Using Hybrid Seeker Optimization-based Image Clustering
- Author
- Ritesh Raj, Rajendra S. Sonawane, Subhojit Ghosh, Manoranjan Dash, and Narendra D. Londhe
- Subjects
- Jaccard index, Computer science, Population, Pattern recognition, Image segmentation, Fuzzy logic, Swarm intelligence, Local convergence, Image Processing (Computer-Assisted), Cluster Analysis, Humans, Psoriasis, Radiology, Nuclear Medicine and Imaging, Segmentation, Artificial intelligence, Cluster analysis, Algorithms
- Abstract
Background: In recent years, there has been a massive increase in the number of people suffering from psoriasis. For proper psoriasis diagnosis, psoriasis lesion segmentation is a prerequisite for quantifying the severity of this disease. However, segmentation of psoriatic lesions cannot be evaluated just by visual inspection, as they exhibit inter- and intra-variability among the severity classes. Most of the approaches currently pursued by dermatologists are subjective in nature. The existing conventional clustering algorithms for objective segmentation of psoriasis lesions suffer from premature local convergence. Objective: An alternative method for psoriatic lesion segmentation with objective analysis is sought in the present work, which aims at obtaining optimal lesion segmentation by adopting an evolutionary optimization technique with a higher probability of global convergence. Method: A hybrid evolutionary optimization technique based on the combination of two swarm intelligence algorithms, namely the Artificial Bee Colony and Seeker Optimization algorithms, is proposed. The initial population for the hybrid technique is obtained from two conventional local-based approaches, i.e., the Fuzzy C-means and K-means clustering algorithms. Results: Selecting the initial population from the convergence of the classical techniques reduces the effect of population dynamics on the final solution and hence yields precise lesion segmentation, with a Jaccard index of 0.91 over 720 psoriasis images. Conclusion: The performance comparison reflects the superior performance of the proposed algorithm over other swarm intelligence and conventional clustering algorithms.
- Published
- 2021
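One of the "conventional local-based approaches" the abstract uses to seed its hybrid optimizer, K-means, can be sketched in one dimension. Each run only refines centroids locally, which is exactly the premature local convergence the hybrid method is designed to escape (a generic toy sketch, not the paper's image pipeline):

```python
def kmeans_1d(data, centers, iters=20):
    """Plain 1-D K-means (Lloyd's algorithm): alternately assign points to
    the nearest center, then move each center to its cluster mean. Converges
    only to a local optimum of the within-cluster variance."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for x in data:
            j = min(range(len(centers)), key=lambda k: abs(x - centers[k]))
            clusters[j].append(x)
        # empty clusters keep their previous center
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

# two well-separated 1-D clusters around 1.0 and 5.0
centers = sorted(kmeans_1d([1.0, 1.2, 0.8, 5.0, 5.2, 4.8], [0.0, 10.0]))
```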
10. The local and semilocal convergence analysis of new Newton-like iteration methods.
- Author
- KARAKAYA, Vatan, DOĞAN, Kadri, ATALAN, Yunus, and BOUZARA, Nour El Houda
- Subjects
- NEWTON-Raphson method, SEMILOCAL rings, STOCHASTIC convergence, ALGORITHMS, PICARD schemes
- Abstract
The aim of this paper is to find new iterative Newton-like schemes inspired by the modified Newton iterative algorithm and prove that these iterations are faster than the existing ones in the literature. We further investigate their behavior and finally illustrate the results by numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
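The baseline that such Newton-like schemes modify is the classical Newton iteration, which converges quadratically from starting points inside a local convergence ball around a simple root (a generic sketch of the classical method, not the paper's new schemes):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k).
    Stops when the step is below tol; locally quadratically convergent
    near a simple root."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# root of x^2 - 2, starting inside the local convergence ball
r = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.5)
```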
11. AN ADAPTIVE TRUST REGION ALGORITHM FOR LARGE-RESIDUAL NONSMOOTH LEAST SQUARES PROBLEMS.
- Author
- Sheng, Zhou, Yuan, Gonglin, Cui, Zengru, Duan, Xiabin, and Wang, Xiaoliang
- Subjects
- RESIDUALS (Payments), NONSMOOTH optimization, LEAST squares, ECONOMIC convergence, PROBLEM solving, ALGORITHMS
- Abstract
In this paper, an adaptive trust region algorithm in which the trust region radius converges to zero is presented for solving large-residual nonsmooth least squares problems. This algorithm uses the smoothing technique of the approximation function, and it combines an adaptive trust region radius. Moreover, this algorithm differs from the existing methods for solving nonsmooth equations through use of the approximation function of second-order information, which improves the convergence rate for large-residual nonsmooth least squares problems. Under some suitable conditions, the global and local superlinear convergences of the proposed method are proven. The preliminary numerical results indicate that the proposed algorithm is effective and suitable for solving large-residual nonsmooth least squares problems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
12. Improving the accessibility of Steffensen’s method by decomposition of operators.
- Author
- Hernández-Verón, M.A. and Martínez, Eulalia
- Subjects
- ITERATIVE methods (Mathematics), STOCHASTIC convergence, NONDIFFERENTIABLE functions, NEWTON-Raphson method, ALGORITHMS
- Abstract
Solving equations of the form H(x) = 0 is usually done by applying iterative methods. The main interest of this paper is to improve the domain of starting points for Steffensen's method. In general, the accessibility of iterative methods that use divided differences in their algorithms is reduced, since there are difficulties in choosing starting points that guarantee the convergence of the methods. In particular, by using a decomposition of the operator H and applying a special type of iterative method, which combines two iterative schemes in the algorithm, we can improve the accessibility of Steffensen's method. Moreover, we analyze the local convergence of the proposed iterative method in two cases: when H is differentiable and when H is non-differentiable. The dynamical properties show that the method also improves the region of accessibility of Steffensen's method for non-differentiable operators. Thus, we present an alternative for the non-applicability of Newton's method to non-differentiable operators that improves the accessibility of Steffensen's method. The theoretical results are illustrated with numerical experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
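Steffensen's method itself, whose accessibility the paper improves, replaces the derivative in Newton's method with a divided difference built from f(x) and f(x + f(x)), so no derivative is ever evaluated (a minimal scalar sketch of the classical method):

```python
def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Steffensen's derivative-free iteration:
    x_{k+1} = x_k - f(x_k)^2 / (f(x_k + f(x_k)) - f(x_k)).
    The divided difference plays the role of f'(x_k) in Newton's method."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx   # first-order divided difference times f(x)
        x = x - fx * fx / denom
    return x

r = steffensen(lambda x: x * x - 2.0, x0=1.5)
```

The accessibility issue the abstract discusses is visible here: the auxiliary point x + f(x) can land far from the root when |f(x)| is large, so the set of good starting points is smaller than Newton's.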
13. On the Local Convergence Analysis of the Gradient Sampling Method for Finite Max-Functions.
- Author
- Helou, Elias, Santos, Sandra, and Simões, Lucas
- Subjects
- SAMPLING methods, ALGORITHMS, MATHEMATICAL optimization, MATHEMATICAL analysis, CAUCHY problem, PARTIAL differential equations
- Abstract
The gradient sampling method is a recently developed tool for solving unconstrained nonsmooth optimization problems. Using just first-order information about the objective function, it generalizes the steepest descent method, one of the most classical methods for minimizing a smooth function. This study aims at determining under which circumstances one can expect the same local convergence result of the Cauchy method for the gradient sampling algorithm under the assumption that the problem is stated by a finite max-function around the optimal point. Additionally, at the end, we show how to practically accomplish the required hypotheses during the execution of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
14. Estimating the Local Radius of Convergence for Picard Iteration.
- Author
- Măruşter, Ştefan
- Subjects
- STOCHASTIC convergence, FIXED point theory, HILBERT space, NEWTON-Raphson method, ALGORITHMS
- Abstract
In this paper, we propose an algorithm to estimate the radius of convergence for the Picard iteration in the setting of a real Hilbert space. Numerical experiments show that the proposed algorithm provides convergence balls close to or even identical to the best ones. As the algorithm does not require evaluating the norm of derivatives, the computing effort is relatively low. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
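The Picard iteration whose convergence radius the paper estimates is simply repeated application of the operator; it converges locally whenever the operator is a contraction around the fixed point. A scalar toy where T = cos is such a contraction:

```python
import math

def picard(T, x0, iters=200):
    """Picard (fixed point) iteration x_{k+1} = T(x_k). Converges locally
    when T is a contraction around the fixed point; the convergence ball is
    what the paper's algorithm estimates in a Hilbert-space setting."""
    x = x0
    for _ in range(iters):
        x = T(x)
    return x

# cos has a unique fixed point x* ~ 0.739085 with |cos'(x*)| < 1,
# so Picard iteration converges from any real starting point
fixed = picard(math.cos, 1.0)
```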
15. A trust region algorithm with a worst-case iteration complexity of $$\mathcal{O}(\epsilon ^{-3/2})$$ for nonconvex optimization.
- Author
- Curtis, Frank, Samadi, Mohammadreza, and Robinson, Daniel
- Subjects
- NONSMOOTH optimization, ALGORITHMS, NONLINEAR estimation, EVALUATION, ITERATIVE methods (Mathematics)
- Abstract
We propose a trust region algorithm for solving nonconvex smooth optimization problems. For any $$\overline{\epsilon} \in (0,\infty)$$, the algorithm requires at most $$\mathcal{O}(\epsilon^{-3/2})$$ iterations, function evaluations, and derivative evaluations to drive the norm of the gradient of the objective function below any $$\epsilon \in (0,\overline{\epsilon}]$$. This improves upon the $$\mathcal{O}(\epsilon^{-2})$$ bound known to hold for some other trust region algorithms and matches the $$\mathcal{O}(\epsilon^{-3/2})$$ bound for the recently proposed Adaptive Regularisation framework using Cubics, also known as the ARC algorithm. Our algorithm, entitled TRACE, follows a trust region framework, but employs modified step acceptance criteria and a novel trust region update mechanism that allow the algorithm to achieve such a worst-case global complexity bound. Importantly, we prove that our algorithm also attains global and fast local convergence guarantees under assumptions similar to those for other trust region algorithms. We also prove a worst-case upper bound on the number of iterations, function evaluations, and derivative evaluations that the algorithm requires to obtain an approximate second-order stationary point. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
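The generic trust-region mechanics that algorithms like TRACE build on can be sketched in one dimension: minimize a quadratic model within the radius, compare actual to predicted reduction, and adapt the radius (a textbook sketch with conventional constants, not TRACE's modified acceptance rules or update mechanism):

```python
def trust_region_1d(f, g, h, x0, delta=1.0, eta=0.1, iters=50):
    """Basic 1-D trust-region loop. g, h give the gradient and second
    derivative used in the quadratic model m(s) = f + g*s + h*s^2/2."""
    x = x0
    for _ in range(iters):
        gx, hx = g(x), h(x)
        # minimizer of the model on [-delta, delta]
        if hx > 0:
            s = -gx / hx
        else:
            s = -delta if gx > 0 else delta
        s = max(-delta, min(delta, s))
        pred = -(gx * s + 0.5 * hx * s * s)      # predicted reduction
        actual = f(x) - f(x + s)                  # actual reduction
        rho = actual / pred if pred > 0 else 0.0
        if rho >= eta:                            # accept the step
            x = x + s
        # radius update from the agreement ratio
        delta = 2.0 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
    return x

xmin = trust_region_1d(lambda x: (x - 1.0) ** 4,
                       lambda x: 4.0 * (x - 1.0) ** 3,
                       lambda x: 12.0 * (x - 1.0) ** 2, x0=3.0)
```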
16. Local convergence of a trust-region algorithm with line search filter technique for nonlinear constrained optimization.
- Author
- Pei, Yonggang and Zhu, Detong
- Subjects
- STOCHASTIC convergence, ALGORITHMS, CONSTRAINED optimization, NONLINEAR programming, ITERATIVE methods (Mathematics), NUMERICAL analysis
- Abstract
A trust-region algorithm combined with a line search filter technique for solving nonlinear equality constrained programming is proposed in this paper. In the current iteration, a trial step providing sufficient descent is generated by solving a corresponding trust-region subproblem. Then, the step size is decided by backtracking line search together with the filter technique to obtain the next iterate. The advantage of this method is that repeatedly resolving the trust-region subproblem to determine a new iterate, as in traditional trust-region methods, can be avoided, and hence expensive computation can be lessened. Moreover, the difficult decisions regarding the choice of penalty parameters in merit functions are avoided by using the filter technique. Second-order correction steps are introduced in the proposed algorithm to overcome the Maratos effect. Convergence analysis shows that fast local convergence can be achieved under some mild assumptions. Preliminary numerical results are reported. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
17. A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative.
- Author
- Argyros, Ioannis and Magreñán, Á.
- Subjects
- STOCHASTIC convergence, CHEBYSHEV approximation, NONLINEAR equations, NUMERICAL analysis, ALGORITHMS
- Abstract
We study the local convergence of Chebyshev-Halley-type methods of convergence order at least five to approximate a locally unique solution of a nonlinear equation. Earlier studies such as Behl (), Bruns and Bailey (Chem. Eng. Sci 32, 257-264, ), Candela and Marquina (Computing 44, 169-184, ), (Computing 45(4):355-367, ), Chicharro et al. (), Chun (Appl. Math. Comput, 190(2):1432-1437, ), Cordero et al. (Appl.Math. Lett. 26, 842-848, ), Cordero et al. (Appl. Math. Comput. 219, 8568-8583, ), Cordero and Torregrosa (Appl. Math. Comput. 190, 686-698, ), Ezquerro and Hernández (Appl. Math. Optim. 41(2):227-236, ), (BIT Numer. Math. 49, 325-342, ), (J. Math. Anal. Appl. 303, 591-601, ), Gutiérrez and Hernández (Comput. Math. Applic. 36(7):1-8, ), Ganesh and Joshi (IMA J. Numer. Anal. 11, 21-31, ), Hernández (Comput. Math. Applic. 41(3-4):433-455, ), Hernández and Salanova (Southwest J. Pure Appl. Math. 1, 29-40, ), Jarratt (Math. Comput. 20(95):434-437, ), Kou and Li (Appl. Math. Comput. 189, 1816-1821, ), Li (Appl. Math. Comput. 235, 221-225, ), Ren et al. (Numer. Algorithm. 52(4):585-603, ), Wang et al. (Numer. Algorithm. 57, 441-456, ), Kou et al. (Numer. Algorithm. 60, 369-390, ) show convergence under hypotheses on the third derivative or even higher. The convergence in this study is shown under hypotheses on the first derivative. Hence, the applicability of the method is expanded. The dynamical analyses of these methods are also studied. Finally, numerical examples are also provided to show that our results apply to solve equations in cases where earlier studies cannot apply. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
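The Chebyshev-Halley family studied here contains Halley's classical third-order method as its best-known member; a scalar sketch of that ancestral scheme (the paper's variants are second-derivative-free, so this is only the starting point they modify):

```python
def halley(f, df, d2f, x0, iters=20):
    """Halley's iteration: x_{k+1} = x_k - 2 f f' / (2 f'^2 - f f'').
    Cubically convergent near a simple root, but it needs the second
    derivative -- the cost the abstract's derivative-free variants avoid."""
    x = x0
    for _ in range(iters):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        denom = 2.0 * dfx * dfx - fx * d2fx
        if denom == 0:
            break
        x = x - 2.0 * fx * dfx / denom
    return x

# root of x^2 - 2 with f' = 2x, f'' = 2
r = halley(lambda x: x * x - 2.0, lambda x: 2.0 * x, lambda x: 2.0, x0=1.5)
```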
18. A novel single multiplicative neuron model trained by an improved glowworm swarm optimization algorithm for time series prediction.
- Author
- Cui, Huimin, Feng, Jianxin, Guo, Jin, and Wang, Tingfeng
- Subjects
- PARTICLE swarm optimization, TIME series analysis, ALGORITHMS, PREDICTION models, ROBUST control, DIFFERENTIAL evolution
- Abstract
To better predict time series, in this paper the single multiplicative recurrent neuron (SMRN) is constructed by adding feedforward and feedback links at the nodes of the original single multiplicative neuron (SMN). The glowworm swarm optimization (GSO) algorithm, a method for training the parameters of various kinds of neural network models, easily falls into locally optimal traps during the optimization process, and its movement stability is poor because it keeps no memory of the search history. To overcome these disadvantages, first a linearly declining inertia weight is incorporated into the location update formula of standard GSO (LWGSO). After that, in order to further enhance robustness, the differential evolution (DE) algorithm is introduced into LWGSO, forming LWGSODE. Standard unimodal and multimodal static test functions in high dimensions have been used to test its properties. The statistical experimental results show that the proposed LWGSODE approach performs much better than basic GSO in terms of solution precision, robustness, and convergence speed. Moreover, the function optimization results are also competitive when compared with other state-of-the-art methods in the literature. Finally, the LWGSODE algorithm is used to train the SMRN for time series prediction, and the approximation results are improved significantly. All the results obtained reveal that the novel SMRN model combined with the proposed LWGSODE algorithm provides a promising means to approximate nonlinear series in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
19. A quick and accurate method for the estimation of covariate effects based on empirical Bayes estimates in mixed-effects modeling: Correction of bias due to shrinkage
- Author
- Liping Zhang, Fangbiao Tao, Liang Zhao, Xu Steven Xu, Min Yuan, Yaning Yang, José Pinheiro, Jinfeng Xu, and Xiaohui Huang
- Subjects
- Statistics and Probability, Epidemiology, Computer science, Coronary Disease, Empirical Research, Bayes' theorem, Bias, Health Information Management, Statistics, Covariate, Humans, In patient, Estimation, Models (Statistical), Bayes Theorem, Local convergence, Nonlinear system, Trajectory, Mixed effects, Algorithms
- Abstract
Nonlinear mixed-effects modeling is a popular approach to describe the temporal trajectory of repeated measurements of clinical endpoints collected over time in clinical trials, to distinguish the within-subject and the between-subject variabilities, and to investigate clinically important risk factors (covariates) that may partly explain the between-subject variability. Due to the complex computing algorithms involved in nonlinear mixed-effects modeling, estimation of covariate effects is often time-consuming and error-prone owing to local convergence. We develop a fast and accurate estimation method based on empirical Bayes estimates from the base mixed-effects model without covariates, and simple regressions outside of the nonlinear mixed-effect modeling framework. Application of the method is illustrated using a pharmacokinetic dataset from an anticoagulation drug for the prevention of major cardiovascular events in patients with acute coronary syndrome. Both the application and extensive simulations demonstrated that the performance of this high-throughput method is comparable to the commonly used maximum likelihood estimation in nonlinear mixed-effects modeling.
- Published
- 2018
20. A novel algorithm to resolve lack of convergence and checkerboard instability in bone adaptation simulations using non-local averaging
- Author
- J. Martínez-Reina, Jose L Calvo-Gallego, José Manuel García-Aznar, Peter Pivonka, Universidad de Sevilla. Departamento de Ingeniería Mecánica y de Fabricación, and Consejería de Economía, Conocimiento, Empresas y Universidad (Junta de Andalucía)
- Subjects
- Checkerboard, Computer science, Finite Element Analysis, Biomedical Engineering, Instability, Bone and Bones, Bone Density, Bone adaptation, Topology optimization, Molecular Biology, Mechanostat Theory, Stress concentration, Applied Mathematics, Stiffness, Finite element method, Local convergence, Computational Theory and Mathematics, Modeling and Simulation, Stress (Mechanical), FE analysis, Convergence, Algorithm, Algorithms, Software, Smoothing
- Abstract
Article number: e3419. Checkerboard is a typical instability in finite element (FE) simulations of bone adaptation, and in topology optimization in general. It consists of a patchwork pattern with elements of alternating stiffness, producing lack of convergence and instabilities in the predicted bone density. Averaging techniques have been proposed to solve this problem. One of the most acknowledged techniques (the node-based formulation) has severe drawbacks: high sensitivity to mesh density and type of element integration (full vs. reduced) and, more importantly, oscillatory solutions also leading to lack of convergence. We propose a new solution consisting of a non-local smoothing technique. It defines, as the mechanical stimulus governing bone adaptation at a certain integration point of the mesh, the average of the stimuli obtained at the neighbouring integration points. That average is weighted with a decay function of the distance to the centre of the neighbourhood. The new technique has been shown to overcome all the referred problems and perform in a robust way. It was tested on a hollow cylinder, resembling the diaphysis of a long bone, subjected to bending or torsion. The checkerboard instability was eliminated and local convergence of bone adaptation was achieved rapidly, in contrast to the other averaging technique and to the model without control of the checkerboard instability. The new algorithm was also tested, with good results, on the same geometry but in a model containing a void, which produces a stress concentration that usually leads to checkerboard instability, as in other applications such as simulations of bone-implant interfaces. Consejería de Economía, Conocimiento, Empresas y Universidad (Junta de Andalucía) P18-RT-3611
- Published
- 2020
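The non-local averaging idea can be sketched on a 1-D row of integration points: each point's stimulus is replaced by a distance-weighted average over its neighbourhood. The linear decay weight below is a hypothetical stand-in for the paper's decay function, and the alternating input mimics a checkerboard pattern:

```python
def nonlocal_average(points, stimuli, radius):
    """Non-local smoothing sketch: the stimulus at each point becomes the
    average of neighbouring stimuli within `radius`, weighted by a decay
    of the distance to the point (linear decay chosen for illustration)."""
    smoothed = []
    for i, pi in enumerate(points):
        wsum, ssum = 0.0, 0.0
        for pj, sj in zip(points, stimuli):
            d = abs(pj - pi)
            if d <= radius:
                w = 1.0 - d / radius if radius > 0 else 1.0
                wsum += w
                ssum += w * sj
        smoothed.append(ssum / wsum if wsum > 0 else stimuli[i])
    return smoothed

# a checkerboard-like alternating stimulus gets flattened by the averaging
values = nonlocal_average([0.0, 1.0, 2.0, 3.0], [1.0, 0.0, 1.0, 0.0], radius=2.0)
```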
21. Bacterial Foraging Optimization Based on Self-Adaptive Chemotaxis Strategy
- Author
- Shen Ping, Lide Wang, Jun Di, and Huang Chen
- Subjects
- Data Analysis, Mathematical optimization, General Computer Science, Computer science, General Mathematics, Computer applications to medicine (Medical informatics), Stability (learning theory), Neurosciences (Biological psychiatry, Neuropsychiatry), Swarm intelligence, Local optimum, Artificial Intelligence, Computer Simulation, Information exchange, Bacteria, General Neuroscience, Chemotaxis, General Medicine, Models (Theoretical), Local convergence, Range (mathematics), Test set, Benchmark (computing), Algorithms
- Abstract
The bacterial foraging optimization (BFO) algorithm is a novel swarm intelligence optimization algorithm that has been adopted in a wide range of applications. However, the classical BFO algorithm still has two major drawbacks: one is the fixed step size, which makes it difficult to balance the exploration and exploitation abilities; the other is the weak connection among the bacteria, which risks convergence to a local optimum instead of the global optimum. To overcome these two drawbacks of the classical BFO, a BFO based on a self-adaptive chemotaxis strategy (SCBFO) is proposed in this paper. In the SCBFO algorithm, the self-adaptive chemotaxis strategy is designed considering two aspects: self-adaptive swimming based on features of the bacterial search state, and improved chemotaxis flipping based on an information exchange strategy. The optimization results of the SCBFO algorithm are analyzed with the CEC 2015 benchmark test set and compared with the results of the classical and other improved BFO algorithms. Through the tests and comparisons, the SCBFO algorithm proves to be effective in reducing the risk of local convergence, balancing exploration and exploitation, and enhancing the stability of the algorithm. Hence, the major contribution of this research is the SCBFO algorithm, which provides a novel and practical strategy for dealing with more complex optimization tasks.
- Published
- 2020
- Full Text
- View/download PDF
22. Global and local convergence of a penalty-free method for nonlinear programming
- Author
- Chen, Zhongwen and Qiu, Songqiang
- Subjects
- STOCHASTIC convergence, NONLINEAR programming, ALGORITHMS, CONSTRAINED optimization, FEASIBILITY studies, NUMERICAL analysis
- Abstract
We present a class of trust region algorithms that do not use any penalty function or a filter for nonlinear equality constrained optimization. In each iteration, the infeasibility is controlled by a progressively decreasing upper limit and trial steps are computed by a Byrd-Omojokun-type trust region strategy. Measures of optimality and infeasibility are computed, whose relationship serves as a criterion by which the algorithm decides which one to focus on improving. As a result, the algorithm keeps a balance between the improvements on optimality and feasibility even though no restoration phase, as required by filter methods, is used. The framework of the algorithm ensures global convergence without assuming regularity or boundedness of the iterative sequence. By using a second order correction strategy, the Maratos effect is avoided and fast local convergence is shown. Preliminary numerical results are reported. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
23. Multistability of α-divergence based NMF algorithms
- Author
- Yang, Shangming and Ye, Mao
- Subjects
- ALGORITHMS, DIVERGENCE theorem, NONNEGATIVE matrices, FACTORIZATION, INVARIANT sets, LYAPUNOV stability
- Abstract
In this paper, the multistability of a class of Amari's α-divergence based nonnegative matrix factorization learning algorithms is analyzed. The analysis results show that invariant sets for the update algorithms can be constructed; in these invariant sets, the non-divergence of the discussed algorithms can be guaranteed. Based on Lyapunov's stability theorem, the local convergence of this class of learning algorithms is proved in the domain of their update rules. In the simulation, the analysis results are applied to image representation. Experiment results demonstrate that selecting suitable initial data for different applications of these nonnegative matrix factorization algorithms is very important. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
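A classical member of the divergence-based NMF family analyzed above is the Lee-Seung multiplicative update for the Euclidean (Frobenius) loss. Because every update multiplies by a nonnegative ratio, the iterates stay in the nonnegative orthant, which is the kind of invariant-set behaviour the abstract studies (a sketch of this standard special case, not the paper's full α-divergence family):

```python
def matmul(A, B):
    """Plain nested-list matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def nmf(V, W, H, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates for min ||V - WH||_F^2:
    H <- H * (W^T V) / (W^T W H),  W <- W * (V H^T) / (W H H^T).
    eps guards against division by zero; nonnegativity is preserved."""
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(WT, matmul(W, H))
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(len(H[0]))] for i in range(len(H))]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(matmul(W, H), HT)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(len(W[0]))] for i in range(len(W))]
    return W, H

V = [[1.0, 2.0], [2.0, 4.0]]           # rank-1 nonnegative matrix
W0, H0 = [[0.5], [1.0]], [[1.0, 1.0]]  # positive rank-1 initialization
W, H = nmf(V, W0, H0)
```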
24. On the local convergence of a derivative-free algorithm for least-squares minimization.
- Author
- Zhang, Hongchao and Conn, Andrew
- Subjects
- ALGORITHMS, MATRIX derivatives, NUMERICAL analysis, STOCHASTIC convergence
- Abstract
In Zhang et al. (accepted by SIAM J. Optim.), we developed a class of derivative-free algorithms, called DFLS, for least-squares minimization. Global convergence of the algorithm, as well as its excellent numerical performance within a limited computational budget, was established and discussed in the same paper. Here we would like to establish the local quadratic convergence of the algorithm for zero residual problems. Asymptotic convergence performance of the algorithm for both zero and nonzero residual problems is tested. Our numerical experiments indicate that the algorithm is also very promising for achieving high accuracy solutions compared with software packages that do not exploit the special structure of the least-squares problem or that use finite differences to approximate the gradients. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
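As context for the quadratic local rate claimed above, the following minimal sketch (not the DFLS code, which is derivative-free) shows the structure the analysis exploits: for a zero-residual least-squares problem, even a plain Gauss-Newton iteration with explicit Jacobians converges quadratically near the solution. The residual function `r` and starting point are illustrative choices, not from the paper.

```python
import numpy as np

# Illustrative zero-residual least-squares problem: r(x*) = 0 at x* = (1, 1).
def r(x):
    return np.array([x[0]**2 - x[1], x[0] + x[1] - 2.0])

def J(x):
    return np.array([[2.0*x[0], -1.0], [1.0, 1.0]])

def gauss_newton(x, iters=8):
    # Gauss-Newton: solve the linearized least-squares subproblem each step
    for _ in range(iters):
        step, *_ = np.linalg.lstsq(J(x), -r(x), rcond=None)
        x = x + step
    return x

x_star = gauss_newton(np.array([2.0, 3.0]))
```

With a zero residual at the solution, the neglected second-order term of the Hessian vanishes, which is why the quadratic rate survives without full second derivatives.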
25. Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results.
- Author
-
Cartis, Coralia, Gould, Nicholas I. M., and Toint, Philippe L.
- Subjects
- *
MATHEMATICAL optimization , *ALGORITHMS , *NEWTON-Raphson method , *LIPSCHITZ spaces , *MATHEMATICAL analysis , *MATHEMATICS - Abstract
An Adaptive Regularisation algorithm using Cubics (ARC) is proposed for unconstrained optimization, generalizing at the same time an unpublished method due to Griewank (Technical Report NA/12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177-205, 2006) and a proposal by Weiser et al. (Optim Methods Softw 22(3):413-431, 2007). At each iteration of our approach, an approximate global minimizer of a local cubic regularisation of the objective function is determined, and this ensures a significant improvement in the objective so long as the Hessian of the objective is locally Lipschitz continuous. The new method uses an adaptive estimation of the local Lipschitz constant and approximations to the global model-minimizer which remain computationally viable even for large-scale problems. We show that the excellent global and local convergence properties obtained by Nesterov and Polyak are retained, and sometimes extended to a wider class of problems, by our ARC approach. Preliminary numerical experiments with small-scale test problems from the CUTEr set show encouraging performance of the ARC algorithm when compared to a basic trust-region implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
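The adaptive-regularisation mechanism described above can be sketched in one dimension, where the cubic model can be minimized exactly. This is only an illustration of the idea, not the paper's ARC method: the real algorithm works in R^n with approximate subproblem minimizers, and the test function, acceptance threshold, and sigma-update factors below are assumptions.

```python
import numpy as np

def cubic_step(g, h, sigma):
    # Exact stationary point of g*s + 0.5*h*s^2 + (sigma/3)*|s|^3
    # on the descent side (sign opposite to g); sigma > 0.
    if g == 0.0:
        return 0.0
    t = (-h + np.sqrt(h*h + 4.0*sigma*abs(g))) / (2.0*sigma)
    return -np.sign(g) * t

def arc_1d(f, g, h, x, sigma=1.0, iters=100):
    for _ in range(iters):
        gx = g(x)
        if abs(gx) < 1e-12:
            break
        s = cubic_step(gx, h(x), sigma)
        model_dec = -(gx*s + 0.5*h(x)*s*s + (sigma/3.0)*abs(s)**3)
        rho = (f(x) - f(x + s)) / max(model_dec, 1e-300)
        if rho > 0.1:                        # successful step: accept, relax sigma
            x, sigma = x + s, max(0.5*sigma, 1e-8)
        else:                                # unsuccessful: regularise harder
            sigma *= 2.0
    return x

# nonconvex test function 0.25*t^4 - 0.5*t^2 with minimizers at +/-1
xmin = arc_1d(lambda t: 0.25*t**4 - 0.5*t**2,
              lambda t: t**3 - t,
              lambda t: 3.0*t**2 - 1.0,
              x=2.0)
```

As sigma shrinks after successful steps, the cubic step approaches the Newton step, which is the mechanism behind the fast local convergence the abstract refers to.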
26. Local convergence of filter methods for equality constrained non-linear programming.
- Author
-
Karas, Elizabeth W., Gonzaga, Clóvis C., and Ribeiro, Ademir A.
- Subjects
- *
STOCHASTIC convergence , *ALGORITHMS , *NONLINEAR programming , *LINEAR substitutions , *QUADRATIC programming - Abstract
In Gonzaga et al. [A globally convergent filter method for nonlinear programming, SIAM J. Optimiz. 14 (2003), pp. 646-669] we discuss general conditions to ensure global convergence of inexact restoration filter algorithms for non-linear programming. In this article we show how to avoid the Maratos effect by means of a second-order correction. The algorithms are based on feasibility and optimality phases, which can be either independent or not. The optimality phase differs from the original one only when a full Newton step for the tangential minimization of the Lagrangian is efficient but not acceptable by the filter method. In this case a second-order corrector step tries to produce an acceptable point keeping the efficiency of the rejected step. The resulting point is tested by trust region criteria. Under the usual hypotheses, the algorithm inherits the quadratic convergence properties of the feasibility and optimality phases. This article includes a comparison between classical Sequential Quadratic Programming (SQP) and Inexact Restoration (IR) iterations, showing that both methods share the same asymptotic convergence properties. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
27. Scaling iterative closest point algorithm for registration of m–D point sets
- Author
-
Du, Shaoyi, Zheng, Nanning, Xiong, Lei, Ying, Shihui, and Xue, Jianru
- Subjects
- *
ITERATIVE methods (Mathematics) , *ALGORITHMS , *IMAGE registration , *POINT set theory , *CALIBRATION , *CAMERAS , *THREE-dimensional imaging , *IMAGE reconstruction , *DIGITAL image processing - Abstract
Abstract: Point set registration is important for calibration of multiple cameras, 3D reconstruction and recognition, etc. The iterative closest point (ICP) algorithm is accurate and fast for point set registration at the same scale, but it does not handle the case with different scales. This paper instead introduces a novel approach named the scaling iterative closest point (SICP) algorithm, which integrates a scale matrix with boundaries into the original ICP algorithm for scaling registration. At each iterative step of this algorithm, we set up correspondence between two m–D point sets, and then use a simple and fast iterative algorithm, incorporating the singular value decomposition (SVD) method and the properties of the parabola, to compute scale, rotation and translation transformations. The SICP algorithm has been proved to converge monotonically to a local minimum from any given parameters. Hence, to reach the desired global minimum, good initial parameters are required, and these are successfully estimated in this paper by analyzing covariance matrices of the point sets. The SICP algorithm is independent of shape representation and feature extraction, and is thereby general for scaling registration of m–D point sets. Experimental results demonstrate its efficiency and accuracy compared with the standard ICP algorithm. [Copyright Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
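The inner transform-estimation step of scaled registration can be illustrated with the classical SVD-based closed form for a similarity transform between already-corresponding point sets (Umeyama-style). This is only that inner step under known correspondences: the full SICP additionally alternates it with closest-point matching and bounds the scale, which is omitted here.

```python
import numpy as np

def similarity_transform(P, Q):
    """Least-squares s, R, t with s*R @ P + t ~= Q; P, Q are m x n."""
    mp, mq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    X, Y = P - mp, Q - mq
    U, S, Vt = np.linalg.svd(Y @ X.T)
    D = np.eye(P.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(U @ Vt))   # keep R a proper rotation
    R = U @ D @ Vt
    s = np.trace(D @ np.diag(S)) / np.sum(X**2)  # optimal scale
    t = mq - s * (R @ mp)
    return s, R, t

# exact recovery on synthetic correspondences (rotation about z by 0.5 rad)
rng = np.random.default_rng(0)
P = rng.random((3, 40))
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
s_est, R_est, t_est = similarity_transform(
    P, 2.0 * R_true @ P + np.array([[1.0], [2.0], [3.0]]))
```

On noiseless correspondences the scale, rotation, and translation are recovered exactly, which is the building block ICP-type loops iterate.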
28. An affine-scaling interior-point CBB method for box-constrained optimization.
- Author
-
Hager, William, Mair, Bernard, and Zhang, Hongchao
- Subjects
- *
ALGORITHMS , *MATHEMATICAL optimization , *STOCHASTIC convergence , *NUMERICAL analysis , *POSITRON emission tomography , *BOUNDARY value problems - Abstract
We develop an affine-scaling algorithm for box-constrained optimization which has the property that each iterate is a scaled cyclic Barzilai–Borwein (CBB) gradient iterate that lies in the interior of the feasible set. Global convergence is established for a nonmonotone line search, while there is local R-linear convergence at a nondegenerate local minimizer where the second-order sufficient optimality conditions are satisfied. Numerical experiments show that the convergence speed is insensitive to problem conditioning. The algorithm is particularly well suited for image restoration problems which arise in positron emission tomography where the cost function can be infinite on the boundary of the feasible set. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
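A minimal sketch of the cyclic Barzilai-Borwein (CBB) idea on a box-constrained quadratic follows. Two deliberate simplifications relative to the paper: the projection is a plain clip (the paper's affine-scaling variant keeps iterates strictly interior), and the problem data are an arbitrary toy example.

```python
import numpy as np

def cbb_box(A, b, lo, hi, x, cycle=4, iters=200):
    """Minimize 0.5*x'Ax - b'x over the box [lo, hi]^n (A symmetric PD)."""
    step = 1.0 / np.linalg.norm(A, 2)        # safe initial step length
    g = A @ x - b
    for k in range(iters):
        x_new = np.clip(x - step * g, lo, hi)
        g_new = A @ x_new - b
        if k % cycle == cycle - 1:           # refresh the BB1 step once per cycle
            s, y = x_new - x, g_new - g
            if s @ y > 1e-16:
                step = float(s @ s / (s @ y))
        x, g = x_new, g_new
    return x

A = np.diag([1.0, 2.0, 3.0])
b = np.array([2.0, -4.0, 3.0])               # unconstrained minimizer (2, -2, 1)
x = cbb_box(A, b, lo=0.0, hi=1.0, x=np.full(3, 0.5))
```

For a diagonal A the box-constrained minimizer is the componentwise clip of the unconstrained one, here (1, 0, 1), which the iteration reaches quickly; reusing the BB step over a cycle is what makes the method "cyclic".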
29. Incomplete Jacobian Newton method for nonlinear equations
- Author
-
Liu, Hao and Ni, Qin
- Subjects
- *
MATRICES (Mathematics) , *ABSTRACT algebra , *STOCHASTIC convergence , *ALGORITHMS , *MATHEMATICAL programming - Abstract
Abstract: This paper presents a new modified Newton method for nonlinear equations. This method uses a part of the elements of the Jacobian matrix to obtain the next iteration point and is referred to as the incomplete Jacobian Newton (IJN) method. The IJN method may be well suited for solving large scale nonlinear equations with dense Jacobians. The conditions for linear, superlinear and quadratic convergence of the IJN method are given, and the local convergence results are analyzed and proved. Some special IJN algorithms are designed and numerical experiments are given. The results show that the IJN method is promising. [Copyright Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
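The simplest instance of the incomplete-Jacobian idea keeps only the diagonal of the Jacobian. The toy system below is an assumption for illustration (the paper's IJN method admits more general subsets of Jacobian entries); diagonal dominance keeps the neglected part small, giving the kind of local convergence the abstract analyzes.

```python
import numpy as np

def F(x):
    # diagonally dominant nonlinear system F(x) = 0
    return x**3 + 2.0*x - 0.1*np.roll(x, 1) - 1.0

def J_diag(x):
    # retained (diagonal) part of the Jacobian; the weak 0.1-coupling
    # off-diagonal entries are the "incomplete" part that is dropped
    return 3.0*x**2 + 2.0

def ijn(x, iters=50, tol=1e-12):
    for _ in range(iters):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - fx / J_diag(x)        # Newton step with incomplete Jacobian
    return x

x = ijn(np.zeros(4))
```

Because the dropped coupling is small relative to the retained diagonal, the iteration contracts rapidly even though the linear solve per step is trivial.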
30. Local analysis of the feasible primal-dual interior-point method.
- Author
-
Silva, R., Soares, J., and Vicente, L. N.
- Subjects
INTERIOR-point methods ,MATHEMATICAL programming ,STOCHASTIC convergence ,MATHEMATICAL functions ,ALGORITHMS ,MATHEMATICAL optimization - Abstract
In this paper we analyze the rate of local convergence of the Newton primal-dual interior-point method when the iterates are kept strictly feasible with respect to the inequality constraints. It is shown under the classical conditions that the rate is q-quadratic when the functions associated to the binding inequality constraints are concave. In general, the q-quadratic rate is achieved provided the step in the primal variables does not become asymptotically orthogonal to any of the gradients of the binding inequality constraints. Some preliminary numerical experience showed that the feasible method can be implemented in a relatively efficient way, requiring a reduced number of function and derivative evaluations. Moreover, the feasible method is competitive with the classical infeasible primal-dual interior-point method in terms of number of iterations and robustness. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
31. Newton-like method with some remarks
- Author
-
Wu, Xinyuan
- Subjects
- *
STOCHASTIC convergence , *NEWTON-Raphson method , *ALGORITHMS , *MATHEMATICS - Abstract
Abstract: This paper presents a new convergence analysis for the Newton-like method proposed in [Xinyuan Wu, A new continuation Newton-like method and its deformation, Appl. Math. Comput. 112 (2000) 75–78]. Compared with the original version of the convergence analysis, the restriction imposed on f″(x) is removed entirely. To guarantee the quadratic convergence of the Newton-like method, it suffices to assume that f′(x ∗)≠0 and f′(x) is locally Lipschitz near x ∗, where x ∗ is a solution of the nonlinear equation f(x)=0. Moreover, some comments with examples are given for the Newton-like method, in comparison with Newton’s method. It can be concluded that the new algorithm is more feasible and effective than Newton’s method. In particular, when the derivative of the function f(x) at an iterate is singular or almost singular, the Newton-like method is vastly superior to the classical Newton method. The numerical results of the paper strongly support this conclusion. [Copyright Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
32. A Framework for Analyzing Local Convergence Properties with Applications to Proximal-Point Algorithms.
- Author
-
DONG, Y. D. and FISCHER, A.
- Subjects
- *
BANACH spaces , *ALGORITHMS , *HILBERT space , *DIFFERENTIAL inclusions , *STOCHASTIC convergence , *GENERALIZED estimating equations , *LIPSCHITZ spaces , *GENERALIZED spaces , *MATHEMATICAL optimization - Abstract
A general algorithmic scheme for solving inclusions in a Banach space is investigated with respect to its local convergence behavior. Particular emphasis is placed on applications to certain proximal-point-type algorithms in Hilbert spaces. The assumptions do not necessarily require that a solution be isolated. In this way, results existing for the case of a locally unique solution can be extended to cases with nonisolated solutions. Besides the convergence rates for the distance of the iterates to the solution set, strong convergence to a sole solution is shown as well. As one particular application of the framework, an improved convergence rate for an important case of the inexact proximal-point algorithm is derived. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
33. Particle Swarm Optimization for Single Objective Continuous Space Problems: A Review
- Author
-
Mohammad Reza Bonyadi and Zbigniew Michalewicz
- Subjects
Stochastic Processes ,0209 industrial biotechnology ,Mathematical optimization ,Meta-optimization ,MathematicsofComputing_NUMERICALANALYSIS ,Constrained optimization ,Imperialist competitive algorithm ,Particle swarm optimization ,02 engineering and technology ,Models, Theoretical ,ComputingMethodologies_ARTIFICIALINTELLIGENCE ,Pattern Recognition, Automated ,Local convergence ,Computational Mathematics ,020901 industrial engineering & automation ,Derivative-free optimization ,0202 electrical engineering, electronic engineering, information engineering ,Computer Simulation ,020201 artificial intelligence & image processing ,Multi-swarm optimization ,Metaheuristic ,Algorithms ,Mathematics - Abstract
This paper reviews recent studies on the Particle Swarm Optimization (PSO) algorithm. The review focuses on high-impact recent articles that have analyzed and/or modified PSO algorithms. It also presents some potential areas for future study.
- Published
- 2017
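As a generic instance of the algorithm family this review covers, here is a textbook global-best PSO on the sphere function. The inertia and acceleration constants follow the common constriction-style settings; the benchmark function, swarm size, and iteration budget are arbitrary illustrative choices, not tied to any particular variant in the review.

```python
import numpy as np

def pso(f, dim=4, n_particles=20, iters=500, w=0.7298, c=1.4962, seed=4):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # velocity: inertia + cognitive (pbest) + social (gbest) pulls
        v = w*v + c*r1*(pbest - x) + c*r2*(gbest - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest

best = pso(lambda z: float(np.sum(z**2)))    # 4-D sphere function
```

The local-convergence questions the review discusses concern exactly this update: whether the swarm's attractors are actual local optima and how the coefficients control stability.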
34. Local Convergence of an Inexact-Restoration Method and Numerical Experiments.
- Author
-
Birgin, E. G. and Martínez, J. M.
- Subjects
- *
STOCHASTIC convergence , *NONLINEAR programming , *DYNAMIC programming , *MATHEMATICAL programming , *INTEGER programming , *LINEAR programming , *MATHEMATICAL optimization , *MATHEMATICS , *ALGORITHMS - Abstract
Local convergence of an inexact-restoration method for nonlinear programming is proved. Numerical experiments are performed with the objective of evaluating the behavior of the purely local method against a globally convergent nonlinear programming algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
35. Line Search Filter Methods for Nonlinear Programming: Local Convergence.
- Author
-
Wächter, Andreas and Biegler, Lorenz T.
- Subjects
- *
NONLINEAR programming , *MANAGEMENT science , *MATHEMATICAL programming , *ALGORITHMS , *MATHEMATICAL optimization , *OPERATIONS research , *STOCHASTIC convergence - Abstract
A line search method is proposed for nonlinear programming using Fletcher and Leyffer's filter method, which replaces the traditional merit function. A simple modification of the method proposed in a companion paper [SIAM J. Optim., 16 (2005), pp. 1--31] introducing second order correction steps is presented. It is shown that the proposed method does not suffer from the Maratos effect, so that fast local convergence to second order sufficient local solutions is achieved. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
36. Choosing Measurement Poses for Robot Calibration with the Local Convergence Method and Tabu Search.
- Author
-
Daney, David, Papegay, Yves, and Madeline, Blaise
- Subjects
- *
ROBOTS , *CALIBRATION , *STOCHASTIC convergence , *PHYSICAL measurements , *JACOBIAN matrices , *ALGORITHMS - Abstract
The robustness of robot calibration with respect to sensor noise is sensitive to the manipulator poses used to collect measurement data. In this paper we propose an algorithm based on a constrained optimization method, which allows us to choose a set of measurement configurations. It works by selecting iteratively one pose after another inside the workspace. After a few steps, a set of configurations is obtained which maximizes an index of observability associated with the identification Jacobian. This algorithm was shown, in a former work, to be sensitive to local minima. This is why we propose here meta-heuristic methods to decrease this sensitivity of our algorithm. Finally, validation through the simulation of a calibration experiment shows that using the selected configurations significantly improves the kinematic parameter identification, dividing the noise associated with the results by 10-15. We also present an application to the calibration of a parallel robot with a vision-based measurement device. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
37. A nonlinear approach to harmonic signal modeling
- Author
-
Abd-Elrady, Emad
- Subjects
- *
SIGNALS & signaling , *WIENER integrals , *MATHEMATICAL models , *ALGORITHMS - Abstract
Periodic signals can be estimated recursively by exploiting the fact that a sine wave passing through a static nonlinear function generates a spectrum of overtones. A real wave with unknown period in cascade with a piecewise linear function is therefore used as a parameterization for the estimated signal model. In this paper the driving periodic wave can be chosen depending on any prior knowledge. A recursive Gauss–Newton prediction error identification algorithm for joint estimation of the driving frequency and the parameters of the nonlinear output function is introduced. Local convergence properties as well as the Cramér–Rao bound (CRB) are derived for the suggested algorithm. [Copyright Elsevier]
- Published
- 2004
- Full Text
- View/download PDF
38. LOCAL CONVERGENCE OF THE PROXIMAL POINT ALGORITHM AND MULTIPLIER METHODS WITHOUT MONOTONICITY.
- Author
-
Pennanen, Teemu
- Subjects
ALGORITHMS ,MONOTONIC functions ,ITERATIVE methods (Mathematics) ,NONLINEAR programming ,POINT mappings (Mathematics) ,STOCHASTIC convergence ,MATHEMATICAL programming - Abstract
This paper studies the convergence of the classical proximal point algorithm without assuming monotonicity of the underlying mapping. Practical conditions are given that guarantee the local convergence of the iterates to a solution of T(x) ∋ 0, where T is an arbitrary set-valued mapping from a Hilbert space to itself. In particular, when the problem is "nonsingular" in the sense that T^-1 has a Lipschitz localization around one of the solutions, local linear convergence is obtained. This kind of regularity property of variational inclusions has been extensively studied in the literature under the name of strong regularity, and it can be viewed as a natural generalization of classical constraint qualifications in nonlinear programming to more general problem classes. Combining the new convergence results with an abstract duality framework for variational inclusions, the author proves the local convergence of multiplier methods for a very general class of problems. This gives as special cases new convergence results for multiplier methods for nonmonotone variational inequalities and nonconvex nonlinear programming. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
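The iteration analyzed above is x_{k+1} = (I + cT)^{-1}(x_k). The smallest concrete (and, unlike the paper's setting, monotone) example is T = subdifferential of |x|, whose resolvent is soft-thresholding; this sketch only fixes the notation and is not the paper's nonmonotone setting.

```python
def prox_abs(z, c):
    """Resolvent (I + c*d|.|)^(-1): shrink z toward 0 by c (soft-threshold)."""
    if z > c:
        return z - c
    if z < -c:
        return z + c
    return 0.0

# proximal-point iteration x_{k+1} = (I + c*T)^{-1}(x_k)
x, c = 5.0, 0.8
trajectory = [x]
for _ in range(10):
    x = prox_abs(x, c)
    trajectory.append(x)
```

Each step moves the iterate a bounded distance toward the solution set of 0 ∈ T(x) (here {0}) and reaches it exactly once within threshold distance; the paper's contribution is identifying when this local behavior survives without monotonicity of T.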
39. A Convergence Analysis of Unconstrained and Bound Constrained Evolutionary Pattern Search.
- Author
-
Hart, William E.
- Subjects
- *
EVOLUTIONARY computation , *ALGORITHMS , *STOCHASTIC convergence - Abstract
We present and analyze a class of evolutionary algorithms for unconstrained and bound constrained optimization on R^n: evolutionary pattern search algorithms (EPSAs). EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. We show that EPSAs can be cast as stochastic pattern search methods, and we use this observation to prove that EPSAs have a probabilistic, weak stationary point convergence theory. This convergence theory is distinguished by the fact that the analysis does not approximate the stochastic process of EPSAs, and hence it exactly characterizes their convergence properties. [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
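The pattern-search mechanics that EPSAs randomise can be shown with a deterministic skeleton: poll a coordinate pattern, accept any improving point, and contract the step after a failed poll. This is only the underlying pattern search (EPSAs replace the poll with mutation on a population); the test function and contraction factor are illustrative assumptions.

```python
import numpy as np

def pattern_search(f, x, step=1.0, iters=100, shrink=0.5):
    fx = f(x)
    pattern = np.vstack([np.eye(x.size), -np.eye(x.size)])  # +/- coordinate dirs
    for _ in range(iters):
        for d in pattern:
            cand = x + step * d
            fc = f(cand)
            if fc < fx:                 # successful poll: move, keep step
                x, fx = cand, fc
                break
        else:                           # failed poll: contract the step
            step *= shrink
    return x

x = pattern_search(lambda z: float(np.sum((z - np.array([1.0, 2.0]))**2)),
                   np.zeros(2))
```

The step-size contraction rule is exactly the mechanism the convergence theory hinges on: steps shrink only when no pattern direction improves, which certifies approximate stationarity at the current scale.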
40. An inexact projected LM type algorithm for solving convex constrained nonlinear equations.
- Author
-
Gonçalves, Douglas S., Gonçalves, Max L.N., and Oliveira, Fabrícia R.
- Subjects
- *
NONLINEAR equations , *ALGORITHMS - Abstract
In this paper, we propose two complementary variants of the projected Levenberg–Marquardt (LM) algorithm for solving convex constrained nonlinear equations. Since the orthogonal projection onto the feasible set may be computationally expensive, we first propose a local LM algorithm in which inexact projections are allowed. The feasible inexact projections used in our algorithm can be easily obtained by means of iterative methods, such as conditional gradient. Local convergence of the proposed algorithm is established by using an error bound condition which is weaker than the standard full-rank assumption. We further present and analyze a global version of this algorithm by means of a nonmonotone line search technique. Numerical experiments are reported to showcase the effectiveness of the proposed algorithms, especially when the projection onto the feasible set is difficult to compute. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
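A sketch of a projected Levenberg-Marquardt iteration for F(x) = 0 over a box follows. Two simplifications relative to the paper are deliberate and labeled: the projection here is an exact clip (the paper's point is precisely that cheaper *inexact* projections, e.g. via conditional gradient, suffice), and tying the LM parameter to the squared residual norm is one common choice, not necessarily the paper's.

```python
import numpy as np

def F(x):
    # toy constrained system: solutions of F = 0 are (1, 1) and (-1, -1)
    return np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])

def J(x):
    return np.array([[2.0*x[0], 2.0*x[1]], [1.0, -1.0]])

def projected_lm(x, lo, hi, iters=30):
    for _ in range(iters):
        Fx, Jx = F(x), J(x)
        mu = Fx @ Fx                                   # LM regularisation
        step = np.linalg.solve(Jx.T @ Jx + mu*np.eye(x.size), -Jx.T @ Fx)
        x = np.clip(x + step, lo, hi)                  # (exact) projection
    return x

x = projected_lm(np.array([1.5, 0.5]), lo=0.0, hi=2.0)
```

Restricting to the box [0, 2]^2 selects the solution (1, 1); as the residual shrinks, so does mu, recovering fast local convergence without a full-rank assumption on J.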
41. Extended High Order Algorithms for Equations under the Same Set of Conditions.
- Author
-
Argyros, Ioannis K., Sharma, Debasis, Argyros, Christopher I., Parhi, Sanjaya Kumar, Sunanda, Shanta Kumari, and Argyros, Michael I.
- Subjects
- *
ALGORITHMS , *EQUATIONS , *BANACH spaces - Abstract
A variety of strategies are used to construct algorithms for solving equations. However, higher order derivatives are usually assumed to calculate the convergence order. More importantly, bounds on error and uniqueness regions for the solution are also not derived. Therefore, the benefits of these algorithms are limited. We simply use the first derivative to tackle all these issues and study the ball analysis for two sixth order algorithms under the same set of conditions. In addition, we present a calculable ball comparison between these algorithms. In this manner, we enhance the utility of these algorithms. Our idea is very general. That is why it can also be used to extend other algorithms as well in the same way. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
42. A Novel Framework Based on FastICA for High Density Surface EMG Decomposition
- Author
-
Maoqi Chen and Ping Zhou
- Subjects
Computer science ,Speech recognition ,0206 medical engineering ,Biomedical Engineering ,Action Potentials ,02 engineering and technology ,Electromyography ,Sensitivity and Specificity ,Article ,Pattern Recognition, Automated ,03 medical and health sciences ,0302 clinical medicine ,Isometric Contraction ,Internal Medicine ,medicine ,Electrode array ,Humans ,Computer Simulation ,Muscle, Skeletal ,Principal Component Analysis ,Models, Statistical ,medicine.diagnostic_test ,business.industry ,General Neuroscience ,Rehabilitation ,Reproducibility of Results ,Pattern recognition ,020601 biomedical engineering ,Local convergence ,Motor unit ,Data Interpretation, Statistical ,Principal component analysis ,Pattern recognition (psychology) ,FastICA ,Spike (software development) ,Artificial intelligence ,business ,Algorithms ,030217 neurology & neurosurgery - Abstract
This study presents a progressive FastICA peel-off (PFP) framework for high density surface electromyogram (EMG) decomposition. The novel framework is based on a shift-invariant model for describing surface EMG. The decomposition process can be viewed as progressively expanding the set of motor unit spike trains, which is primarily based on FastICA. To overcome the local convergence of FastICA, a “peel off” strategy (i.e. removal of the estimated motor unit action potential (MUAP) trains from the previous step) is used to mitigate the effects of the already identified motor units, so more motor units can be extracted. Moreover, a constrained FastICA is applied to assess the extracted spike trains and correct possible erroneous or missed spikes. These procedures work together to improve the decomposition performance. The proposed framework was validated using simulated surface EMG signals with different motor unit numbers (30, 70, 91) and signal to noise ratios (SNRs) (20, 10, 0 dB). The results demonstrated relatively large numbers of extracted motor units and high accuracies (high F1-scores). The framework was also tested with 111 trials of 64-channel electrode array experimental surface EMG signals during the first dorsal interosseous (FDI) muscle contraction at different intensities. On average 14.1 ± 5.0 motor units were identified from each trial of experimental surface EMG signals.
- Published
- 2016
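The FastICA core of the framework above can be illustrated with a one-unit tanh-contrast iteration on a toy two-source mixture. This is a much simplified stand-in: the paper's "peel off" step would subtract the recovered component's contribution before re-running, and all EMG specifics (MUAP templates, spike trains) are omitted; the sources and mixing matrix below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def fastica_one_unit(Z, w, iters=200):
    """Z: whitened data (components x samples); returns one unmixing vector."""
    w = w / np.linalg.norm(w)
    for _ in range(iters):
        wz = np.tanh(w @ Z)
        # FastICA fixed point: E[Z g(w'Z)] - E[g'(w'Z)] w, then renormalize
        w = (Z * wz).mean(axis=1) - (1.0 - wz**2).mean() * w
        w = w / np.linalg.norm(w)
    return w

# two independent non-Gaussian sources, linearly mixed
S = np.vstack([rng.uniform(-1.0, 1.0, 20000), rng.laplace(size=20000)])
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = (E / np.sqrt(d)).T @ X                # whitening: cov(Z) = I
w = fastica_one_unit(Z, rng.standard_normal(2))
source_est = w @ Z
```

The recovered component matches one source up to sign and scale; in the paper's framework, that recovered activity is what gets "peeled off" before FastICA is run again to find further motor units.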
43. LOCAL CONVERGENCE IN A GENERALIZED FERMAT-WEBER PROBLEM.
- Author
-
Brimberg, J. and Love, R.F.
- Subjects
OPERATIONS research ,MANAGEMENT science ,LOCATION problems (Programming) ,ALGORITHMS ,MEASUREMENT of distances ,EUCLIDEAN algorithm ,MATHEMATICAL programming - Abstract
In the Fermat-Weber problem, the location of a source point in R^N is sought which minimizes the sum of weighted Euclidean distances to a set of destinations. A classical iterative algorithm known as the Weiszfeld procedure is used to find the optimal location. Kuhn proves global convergence except for a denumerable set of starting points, while Katz provides local convergence results for this algorithm. In this paper, we consider a generalized version of the Fermat-Weber problem, where distances are measured by an ℓp norm and the parameter p takes on a value in the closed interval [1, 2]. This permits the choice of a continuum of distance measures from rectangular (p = 1) to Euclidean (p = 2). An extended version of the Weiszfeld procedure is presented and local convergence results obtained for the generalized problem. Linear asymptotic convergence rates are typically observed. However, in special cases where the optimal solution occurs at a singular point of the iteration functions, this rate can vary from sublinear to quadratic. It is also shown that for sufficiently large values of p exceeding 2, convergence of the Weiszfeld algorithm will not occur in general. [ABSTRACT FROM AUTHOR]
- Published
- 1992
- Full Text
- View/download PDF
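The classical (Euclidean, p = 2) Weiszfeld iteration that the paper generalizes is short enough to state in full: each new iterate is the inverse-distance-weighted average of the destinations. The ℓp extension analyzed in the paper is not reproduced here; the square example is an illustrative choice.

```python
import numpy as np

def weiszfeld(points, weights, x, iters=1000):
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        if np.any(d < 1e-12):            # iterate landed on a destination: stop
            break
        w = weights / d
        x = (w[:, None] * points).sum(axis=0) / w.sum()
    return x

# four equally weighted destinations at the corners of a square:
# by symmetry the optimal source location is the centre, (0, 0)
corners = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
x_opt = weiszfeld(corners, np.ones(4), x=np.array([0.3, 0.2]))
```

The observed convergence is linear, consistent with the typical rates reported in the abstract; the guard for iterates landing on a destination reflects the singular case that drives the sublinear-to-quadratic exceptions.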
44. DNA genetic artificial fish swarm constant modulus blind equalization algorithm and its application in medical image processing
- Author
-
Hui Wang, Y C Guo, and B L Zhang
- Subjects
Diagnostic Imaging ,Computer science ,Noise (signal processing) ,Fishes ,Swarm behaviour ,Image processing ,DNA ,General Medicine ,Signal-To-Noise Ratio ,Peak signal-to-noise ratio ,Image (mathematics) ,Local convergence ,Signal-to-noise ratio ,Convergence (routing) ,Image Processing, Computer-Assisted ,Genetics ,Animals ,Humans ,Molecular Biology ,Algorithm ,Algorithms - Abstract
This study proposes the DNA genetic artificial fish swarm constant modulus blind equalization algorithm (DNA-G-AFS-CMBEA) to overcome the local convergence of the CMBEA. In the proposed algorithm, the fast convergence of the AFS algorithm and the global search capability of the DNA-G algorithm are fused to optimize the position vector of the artificial fish; the resulting global optimal position vector is then used as the initial optimal weight vector of the CMBEA. Application of this improved method to medical image processing demonstrates that the proposed algorithm outperforms the CMBEA and the AFS-CMBEA in removing the noise in a medical image and improving the peak signal-to-noise ratio.
- Published
- 2015
45. Retinal Microaneurysms Detection using Local Convergence Index Features
- Author
-
Behdad Dashtbozorg, Jiong Zhang, Fan Huang, Bart M. ter Haar Romeny, and Medical Image Analysis
- Subjects
FOS: Computer and information sciences ,retina ,Retinal microaneurysms ,Microaneurysm/diagnostic imaging ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,02 engineering and technology ,SDG 3 – Goede gezondheid en welzijn ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,chemistry.chemical_compound ,0302 clinical medicine ,SDG 3 - Good Health and Well-being ,Image Interpretation, Computer-Assisted ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Computer-Assisted/methods ,Humans ,Image resolution ,Image Interpretation ,cs.CV ,Diabetic Retinopathy/diagnostic imaging ,Microaneurysm ,Blindness ,business.industry ,Image Interpretation, Computer-Assisted/methods ,Retinal ,Pattern recognition ,Diabetic retinopathy ,Computer-aided diagnosis ,medicine.disease ,Computer Graphics and Computer-Aided Design ,local convergence filter ,Local convergence ,diabetic retinopathy ,chemistry ,RGB color model ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Classifier (UML) ,microaneurysm detection ,Software ,Algorithms ,Retinopathy - Abstract
Retinal microaneurysms (MAs) are the earliest clinical sign of diabetic retinopathy disease. Detection of MAs is crucial for the early diagnosis of diabetic retinopathy and prevention of blindness. In this paper, a novel and reliable method for automatic detection of MAs in retinal images is proposed. In the first stage of the proposed method, several preliminary microaneurysm candidates are extracted using a gradient weighting technique and an iterative thresholding approach. In the next stage, in addition to intensity and shape descriptors, a new set of features based on local convergence index filters is extracted for each candidate. Finally, the collective set of features is fed to a hybrid sampling/boosting classifier to discriminate the MAs from non-MAs candidates. The method is evaluated on images with different resolutions and modalities (color and scanning laser ophthalmoscope) using six publicly available data sets including the Retinopathy Online Challenge (ROC) data set. The proposed method achieves an average sensitivity score of 0.471 on the ROC data set, outperforming state-of-the-art approaches in an extensive comparison. The experimental results on the other five data sets demonstrate the effectiveness and robustness of the proposed MAs detection method regardless of different image resolutions and modalities.
- Published
- 2017
46. Layered social influence promotes multiculturality in the Axelrod model
- Author
-
Maxi San Miguel, Federico Battiston, Vito Latora, Vincenzo Nicosia, European Commission, Ministerio de Economía y Competitividad (España), and Engineering and Physical Sciences Research Council (UK)
- Subjects
FOS: Computer and information sciences ,LOCAL CONVERGENCE, POPULATIONS, TRANSITION, NETWORKS, CULTURE, CHIMERA, STATES ,Physics - Physics and Society ,media_common.quotation_subject ,Science ,Population ,Culture ,FOS: Physical sciences ,Physics and Society (physics.soc-ph) ,Social Environment ,01 natural sciences ,Article ,010305 fluids & plasmas ,Globalization ,LOCAL CONVERGENCE ,Cultural diversity ,0103 physical sciences ,Humans ,Sociology ,010306 general physics ,education ,media_common ,Social influence ,Social and Information Networks (cs.SI) ,education.field_of_study ,Multidisciplinary ,4. Education ,Social environment ,Computer Science - Social and Information Networks ,Cultural Diversity ,Models, Theoretical ,NETWORKS ,Epistemology ,STATES ,Social system ,POPULATIONS ,Spontaneous mutation ,Medicine ,CHIMERA ,Imitation ,TRANSITION ,Algorithms - Abstract
Despite the presence of increasing pressure towards globalisation, the coexistence of different cultures is a distinctive feature of human societies. However, how multiculturality can emerge in a population of individuals inclined to imitation, and how it remains stable under cultural drift, i.e. the spontaneous mutation of traits in the population, still needs to be understood. To solve such a problem, we propose here a microscopic model of culture dissemination which takes into account that, in real social systems, the interactions are organised in various layers corresponding to different interests or topics. We show that the addition of multiplexity in the modeling of our society generates qualitatively novel dynamical behavior, producing a new stable regime of cultural diversity. This finding suggests that the layered organisation of social influence typical of modern societies is the key ingredient to explain why and how multiculturality emerges and thrives in our world. The authors acknowledge the support of the EU Project LASAGNE, Contract no. 318132 (STREP). V.L. also acknowledges support from EPSRC projects EP/N013492/1. M.S.M. also acknowledges financial support from FEDER (EU) and MINECO (Spain) under Grant ESOTECOS FIS2015-63628-C2-R.
- Published
- 2017
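For reference, the single-layer (monoplex) Axelrod dynamics that the paper extends can be sketched in a few lines: agents imitate one differing trait of a neighbour with probability equal to their cultural overlap. The multiplex coupling that is the paper's actual contribution is not reproduced; the ring topology and the parameter values below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N, F, q = 50, 3, 5                       # agents, cultural features, traits/feature
state = rng.integers(0, q, size=(N, F))
initial_cultures = len({tuple(row) for row in state})

for _ in range(20000):
    i = int(rng.integers(N))
    j = (i + int(rng.choice([-1, 1]))) % N           # random ring neighbour
    overlap = float((state[i] == state[j]).mean())
    if 0.0 < overlap < 1.0 and rng.random() < overlap:
        f = int(rng.choice(np.flatnonzero(state[i] != state[j])))
        state[i, f] = state[j, f]        # imitate one differing trait

final_cultures = len({tuple(row) for row in state})
```

Interaction is only possible at partial overlap, so fully distinct cultures stop influencing each other; running the dynamics coarsens the initial random assignment into fewer cultural domains, and the question the paper addresses is how layered interactions change that final diversity.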
47. Optimal Sixteenth Order Convergent Method Based on Quasi-Hermite Interpolation for Computing Roots
- Author
-
Nawab Hussain, Fiza Zafar, Athar Kharal, and Zirwah Fatimah
- Subjects
Article Subject ,Iterative method ,Computer science ,lcsh:Medicine ,Interval (mathematics) ,lcsh:Technology ,General Biochemistry, Genetics and Molecular Biology ,Hermite interpolation ,Applied mathematics ,Computer Simulation ,lcsh:Science ,General Environmental Science ,lcsh:T ,lcsh:R ,General Medicine ,Function (mathematics) ,Models, Theoretical ,Local convergence ,Nonlinear system ,Rate of convergence ,lcsh:Q ,Algorithm ,Algorithms ,Mathematics ,Research Article ,Interpolation - Abstract
We have given a four-step, multipoint iterative method without memory for solving nonlinear equations. The method is constructed by using quasi-Hermite interpolation and has convergence order sixteen. As this method requires four function evaluations and one derivative evaluation at each step, it is optimal in the sense of the Kung and Traub conjecture. Comparisons are given with some other newly developed sixteenth-order methods. The interval Newton method is also used to find sufficiently accurate initial approximations. Some figures show the enclosure of finitely many zeroes of nonlinear equations in an interval. Basins of attraction show the effectiveness of the method.
- Published
- 2014
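Order-of-convergence claims like the one in this abstract can be checked numerically via the computational order of convergence (COC). The quasi-Hermite scheme itself is not given in the record, so the sketch below estimates the COC for plain Newton's method as a stand-in; `newton`, `coc`, and the test equation are illustrative choices only.

```python
import math

def newton(f, df, x0, steps):
    """Generate Newton iterates; a stand-in for any one-point scheme."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - f(x) / df(x))
    return xs

def coc(xs, root):
    """Computational order of convergence from the last three error ratios."""
    e = [abs(x - root) for x in xs[-4:]]
    return math.log(e[3] / e[2]) / math.log(e[2] / e[1])

f = lambda x: x**3 - 2.0           # simple test equation, root at 2**(1/3)
df = lambda x: 3.0 * x**2
xs = newton(f, df, 1.0, 4)
root = 2.0 ** (1.0 / 3.0)
print(round(coc(xs, root), 1))     # close to 2.0, since Newton is quadratic
```

For a genuine sixteenth-order method the same `coc` estimate would approach 16, though double precision is exhausted after very few steps at that order.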
48. Learning Latent Variable Gaussian Graphical Model for Biomolecular Network with Low Sample Complexity
- Author
-
Bo Yuan, Quan Liu, and Yanbo Wang
- Subjects
Mathematical optimization ,Lung Neoplasms ,Article Subject ,Gaussian ,Matrix norm ,Normal Distribution ,010103 numerical & computational mathematics ,Latent variable ,lcsh:Computer applications to medicine. Medical informatics ,01 natural sciences ,Regularization (mathematics) ,General Biochemistry, Genetics and Molecular Biology ,Pattern Recognition, Automated ,010104 statistics & probability ,symbols.namesake ,Lasso (statistics) ,Neoplasms ,Computer Graphics ,Image Processing, Computer-Assisted ,Humans ,Computer Simulation ,False Positive Reactions ,Gene Regulatory Networks ,Graphical model ,0101 mathematics ,Mathematics ,Cell Proliferation ,Models, Statistical ,General Immunology and Microbiology ,Applied Mathematics ,Estimator ,Bayes Theorem ,General Medicine ,Markov Chains ,Local convergence ,Gene Expression Regulation, Neoplastic ,Modeling and Simulation ,symbols ,lcsh:R858-859.7 ,Glycolysis ,Algorithms ,Software ,Research Article - Abstract
Learning a Gaussian graphical model with latent variables is ill posed when the sample complexity is insufficient, and it therefore has to be appropriately regularized. A common choice is the convex l1 plus nuclear norm to regularize the search. However, the best estimator performance is not always achieved with these additive convex regularizations, especially when the sample complexity is low. In this paper, we consider a concave additive regularization which does not require the strong irrepresentable condition. We use concave regularization to correct the intrinsic estimation biases of the Lasso and nuclear penalties as well. We establish the proximity operators for our concave regularizations, which induce sparsity and low rankness, respectively. In addition, we extend our method to allow the decomposition of fused structure-sparsity plus low rankness, providing a powerful tool for models with temporal information. Specifically, we develop a nontrivial modified alternating direction method of multipliers with at least local convergence. Finally, we use both synthetic and real data to validate the merits of our method. In the application of reconstructing two-stage cancer networks, "the Warburg effect" can be revealed directly.
- Published
- 2016
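The proximity operators underlying the l1-plus-nuclear-norm baseline mentioned in this abstract are standard: entrywise soft-thresholding for the l1 penalty and singular-value soft-thresholding for the nuclear norm. A minimal sketch (the paper's concave variants are not reproduced here):

```python
import numpy as np

def prox_l1(X, t):
    """Entrywise soft-thresholding: proximity operator of t * ||X||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def prox_nuclear(X, t):
    """Singular-value soft-thresholding: prox of t * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

# Small demo values (arbitrary).
A = prox_l1(np.array([1.5, -0.2, 0.3, -2.0]), 0.5)   # [1.0, 0.0, 0.0, -1.5]
B = prox_nuclear(np.eye(2), 0.4)                      # shrinks both singular
                                                      # values 1.0 -> 0.6
```

These two operators are the building blocks an ADMM solver alternates over when splitting a sparse component from a low-rank one.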
49. An Improved Teaching-Learning-Based Optimization with the Social Character of PSO for Global Optimization
- Author
-
Jiangtao Wang, Debao Chen, and Feng Zou
- Subjects
0209 industrial biotechnology ,Social character ,General Computer Science ,Article Subject ,Computer science ,General Mathematics ,Computation ,02 engineering and technology ,lcsh:Computer applications to medicine. Medical informatics ,lcsh:RC321-571 ,020901 industrial engineering & automation ,Position (vector) ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Humans ,Learning ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,Global optimization ,General Neuroscience ,Teaching ,Process (computing) ,General Medicine ,Class (biology) ,Local convergence ,Benchmark (computing) ,lcsh:R858-859.7 ,020201 artificial intelligence & image processing ,Algorithm ,Algorithms ,Research Article - Abstract
An improved teaching-learning-based optimization combined with the social character of PSO (TLBO-PSO), which considers the teacher's behavioural influence on the students and the mean grade of the class, is proposed in this paper to find the global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified: the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO may stall when the mean position of the students equals the position of the teacher. To decrease the computational cost of the algorithm, the process of removing duplicate individuals used in the original TLBO is not adopted in the improved algorithm. Moreover, the probability of local convergence of the improved method is decreased by a mutation operator. The effectiveness of the proposed method is tested on several benchmark functions, and the results are competitive with those of some other methods.
- Published
- 2016
- Full Text
- View/download PDF
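The teacher-phase update described in this abstract (old position, mean position, best position) can be sketched as the classic TLBO step plus an extra PSO-style pull toward the teacher. This is only an assumed reading of the abstract, not the paper's exact update rule; the sphere function, population size, and iteration count are arbitrary test choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Benchmark objective: sum of squares, global minimum 0 at the origin."""
    return float(np.sum(x * x))

def teacher_phase(pop, fitness):
    """One TLBO teacher phase with an added attraction toward the teacher
    (the best individual). Greedy selection keeps only improving moves."""
    teacher = pop[np.argmin(fitness)]
    mean = pop.mean(axis=0)
    tf = rng.integers(1, 3)                    # teaching factor in {1, 2}
    r1 = rng.random(pop.shape)
    r2 = rng.random(pop.shape)
    cand = pop + r1 * (teacher - tf * mean) + r2 * (teacher - pop)
    cand_fit = np.array([sphere(x) for x in cand])
    better = cand_fit < fitness
    pop[better] = cand[better]
    fitness[better] = cand_fit[better]
    return pop, fitness

pop = rng.uniform(-5, 5, size=(20, 5))
fit = np.array([sphere(x) for x in pop])
best0 = fit.min()
for _ in range(50):
    pop, fit = teacher_phase(pop, fit)
```

Because the selection step is greedy, the best fitness in the population is non-increasing across iterations.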
50. Local Convergence Analysis of FastICA and Related Algorithms
- Author
-
Knut Hüper, Hao Shen, and Martin Kleinsteuber
- Subjects
Principal Component Analysis ,Computer Networks and Communications ,General Medicine ,Blind signal separation ,Independent component analysis ,Pattern Recognition, Automated ,Computer Science Applications ,QR decomposition ,Matrix decomposition ,Local convergence ,symbols.namesake ,Artificial Intelligence ,FastICA ,Source separation ,symbols ,Humans ,Neural Networks, Computer ,Algorithm ,Newton's method ,Algorithms ,Software ,Mathematics - Abstract
The FastICA algorithm is one of the most prominent methods for solving the problem of linear independent component analysis (ICA). Although there have been several attempts to prove local convergence properties of FastICA, a rigorous analysis is still missing in the community. The major difficulty of the analysis stems from the well-known sign-flipping phenomenon of FastICA, which causes the discontinuity of the corresponding FastICA map on the unit sphere. In this paper, by using the concept of principal fiber bundles, FastICA is proven to be locally quadratically convergent to a correct separation. Higher-order local convergence properties of FastICA are also investigated in the framework of a scalar shift strategy. Moreover, as a parallelized version of FastICA, the so-called QR FastICA algorithm, which employs the QR decomposition (Gram–Schmidt orthonormalization) instead of the polar decomposition, is shown to share similar local convergence properties with the original FastICA.
- Published
- 2008
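The one-unit FastICA iteration analysed in this record, including the sign-flipping ambiguity it mentions, can be sketched in a few lines on a synthetic two-source mixture. The mixing matrix, nonlinearity (tanh), and iteration count are illustrative choices; this follows the standard fixed-point update, not the fiber-bundle analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent non-Gaussian sources, linearly mixed.
n = 20000
S = np.vstack([rng.uniform(-1, 1, n), rng.laplace(size=n)])
A = np.array([[1.0, 0.5], [0.3, 1.0]])
X = A @ S

# Whiten the mixtures (FastICA assumes whitened input).
X = X - X.mean(axis=1, keepdims=True)
E, D, _ = np.linalg.svd(np.cov(X))
Z = np.diag(1.0 / np.sqrt(D)) @ E.T @ X

# One-unit FastICA fixed-point iteration with the tanh nonlinearity:
# w <- E[z g(w'z)] - E[g'(w'z)] w, followed by renormalisation.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(200):
    wx = w @ Z
    g, dg = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
    w_new = (Z * g).mean(axis=1) - dg.mean() * w
    w_new /= np.linalg.norm(w_new)
    if w_new @ w < 0:          # fix the sign to sidestep sign flipping
        w_new = -w_new
    w = w_new

y = w @ Z                      # recovered component (up to sign and scale)
```

At a fixed point, `y` correlates almost perfectly with one of the original sources, which is the "correct separation" the convergence analysis refers to.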