
Quasi-Newton Methods

Authors:
Shashi Kant Mishra
Bhagwat Ram
Source:
Introduction to Unconstrained Optimization with R ISBN: 9789811508936
Publication Year:
2019
Publisher:
Springer Singapore, 2019.

Abstract

Quasi-Newton methods do not compute the Hessian of a nonlinear function; instead, an approximation to it is updated by analyzing successive gradient vectors. The first Quasi-Newton algorithm was proposed in 1959 by William C. Davidon, a physicist working at Argonne National Laboratory, United States. Newton's method requires computing the inverse of the Hessian at every iteration, which is very expensive: it takes an order n-cubed effort, where n is the size of the Hessian. These drawbacks of Newton's method motivated the development of Quasi-Newton methods. The idea of a Quasi-Newton method is to approximate the inverse of the Hessian by another matrix that is kept positive definite, so that a good approximation of the inverse Hessian is available at each iteration. This saves the work of computing second derivatives and also avoids the difficulties associated with loss of positive definiteness. The approximating matrix is updated at every iteration, so that the second-order derivative information improves as the search proceeds. In this chapter, we present three important Quasi-Newton methods: the rank one correction formula, the Davidon–Fletcher–Powell (DFP) method, and the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method.
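As a minimal sketch of the idea described above (not the book's own code, which is in R), the following pure-Python snippet applies the BFGS update to maintain a positive-definite approximation H of the inverse Hessian for an assumed two-variable quadratic test function; the step-size rule (simple Armijo backtracking) and iteration count are illustrative choices, not taken from the source.

```python
def f(x):
    # illustrative quadratic test function f(x, y) = (x - 1)^2 + 2(y + 2)^2
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2

def grad(x):
    # gradient of the test function, computed analytically
    return [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def bfgs_update(H, s, y):
    # BFGS inverse-Hessian update:
    #   H_{k+1} = (I - rho s y^T) H_k (I - rho y s^T) + rho s s^T,
    # with rho = 1 / (y^T s); it preserves positive definiteness when y^T s > 0.
    rho = 1.0 / (y[0] * s[0] + y[1] * s[1])
    I = [[1.0, 0.0], [0.0, 1.0]]
    A = [[I[i][j] - rho * s[i] * y[j] for j in range(2)] for i in range(2)]
    AH = [[sum(A[i][k] * H[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    AHAT = [[sum(AH[i][k] * A[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]
    return [[AHAT[i][j] + rho * s[i] * s[j] for j in range(2)] for i in range(2)]

def bfgs(x, iters=20):
    # start from the identity: no second-derivative information yet
    H = [[1.0, 0.0], [0.0, 1.0]]
    g = grad(x)
    for _ in range(iters):
        p = [-v for v in matvec(H, g)]  # quasi-Newton search direction
        # Armijo backtracking line search (an assumed, simple rule)
        t = 1.0
        while f([x[0] + t * p[0], x[1] + t * p[1]]) > \
                f(x) + 1e-4 * t * (g[0] * p[0] + g[1] * p[1]):
            t *= 0.5
        x_new = [x[0] + t * p[0], x[1] + t * p[1]]
        g_new = grad(x_new)
        s = [x_new[0] - x[0], x_new[1] - x[1]]   # step taken
        y = [g_new[0] - g[0], g_new[1] - g[1]]   # change in gradient
        if y[0] * s[0] + y[1] * s[1] > 1e-12:    # curvature condition
            H = bfgs_update(H, s, y)
        x, g = x_new, g_new
    return x, H
```

No Hessian is ever formed: only the gradient differences y and steps s feed the update, which is the point made in the abstract.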

Details

Database:
OpenAIRE
Journal:
Introduction to Unconstrained Optimization with R ISBN: 9789811508936
Accession number:
edsair.doi...........147ab249d7592299d32efda30e307a31
Full Text:
https://doi.org/10.1007/978-981-15-0894-3_9