
A Note on the Application of Davidon's Method to Nonlinear Regression Problems

Authors :
P. Vitale
G. Taylor
Source :
Technometrics. 10:843
Publication Year :
1968
Publisher :
JSTOR.

Abstract

In the statistical literature, Booth et al. (Ref. 1) and Hartley (Ref. 2) developed modifications of the well-known Gauss-Newton iterative method and applied them to the least-squares estimation of parameters in nonlinear regression problems. To ensure convergence, Hartley restricts the applicability of his method to problems that satisfy three assumptions. Herein we describe a method that removes two of Hartley's three assumptions, thereby making it applicable to problems for which his method fails to converge.

The method we propose is based upon an algorithm due to Davidon (Ref. 3): an iterative descent method whose rate of convergence is quadratic in the limit. It was later modified by Fletcher and Powell (Ref. 4) and has been used for locating an unconstrained local minimum of a function of several variables. Fletcher and Powell's account of Davidon's method is very useful when first derivatives of the function are available; in realistic situations, however, it is frequently impractical to calculate first derivatives. The method we describe therefore modifies the basic Fletcher and Powell method, following McGill (Ref. 5) and Taylor (Ref. 6), to include the case in which the gradient is not given analytically.

Our concern is with the introduction and application of the modified Davidon method (MDM) to nonlinear regression problems of the type discussed by Hartley. We omit all proofs concerning the stability and rate of convergence of the method and refer the reader to the excellent paper of Fletcher and Powell for such proofs. Included are a description of the MDM, a description of ways of terminating the iterations, and an illustration of the method on the numerical example Hartley used in his paper. We show results for both the analytic and the approximate gradient.
Also included are results obtained in applying both methods to a second example, which involves the estimation of six parameters for an exponential regression function. For this example, it is shown that the method described in this note converges when Hartley's method fails to converge.
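The note itself is not reproduced here, so the following is only a rough sketch of the kind of procedure the abstract describes: a Davidon-Fletcher-Powell quasi-Newton descent applied to a least-squares objective, with central-difference gradients standing in for the "gradient not given analytically" case. The backtracking line search, step sizes, tolerances, and the two-parameter exponential data set are all assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def num_grad(f, x, h=1e-6):
    """Central-difference gradient approximation, for the case in which
    first derivatives are not available analytically."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def dfp_minimize(f, x0, tol=1e-8, max_iter=200):
    """Davidon-Fletcher-Powell quasi-Newton descent with a simple
    backtracking (Armijo) line search; gradients are numerical."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)          # running approximation to the inverse Hessian
    g = num_grad(f, x)
    for _ in range(max_iter):
        p = -H @ g              # descent direction
        alpha, fx = 1.0, f(x)
        # Backtracking line search (a stand-in for a proper interpolation search)
        while f(x + alpha * p) > fx + 1e-4 * alpha * (g @ p) and alpha > 1e-12:
            alpha *= 0.5
        s = alpha * p
        x_new = x + s
        g_new = num_grad(f, x_new)
        y = g_new - g
        if np.linalg.norm(g_new) < tol or s @ y <= 1e-14:
            return x_new
        # DFP rank-two update of the inverse-Hessian approximation
        Hy = H @ y
        H = H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
        x, g = x_new, g_new
    return x

# Hypothetical noise-free data: fit y = a * exp(b * t) by least squares.
t = np.linspace(0.0, 2.0, 20)
y_obs = 2.0 * np.exp(-1.0 * t)

def sse(theta):
    """Sum of squared residuals, the objective minimized above."""
    a, b = theta
    r = y_obs - a * np.exp(b * t)
    return r @ r

theta_hat = dfp_minimize(sse, np.array([1.0, 0.0]))
```

Unlike Gauss-Newton, this descent scheme does not require the residual Jacobian or any of Hartley's convergence assumptions; it only needs function values, which is what makes the finite-difference variant attractive when derivatives are hard to obtain.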

Details

ISSN :
00401706
Volume :
10
Database :
OpenAIRE
Journal :
Technometrics
Accession number :
edsair.doi...........52cf2e03264e9281fda0dc7b77342a55