On Testing Hypotheses with Divergence Statistics
- Source:
- Soft Methodology and Random Information Systems, ISBN 9783540222644
- Publication Year:
- 2004
- Publisher:
- Springer Berlin Heidelberg
Abstract
- The idea of using functionals of information theory, such as entropy or divergence, in statistical inference is not new. In fact, the so-called statistical information theory has been the subject of extensive research over the past years. Minimum divergence estimators have been used successfully in models for continuous and discrete data. The divergence statistics obtained by replacing either one or both parameters by suitable estimators have become very good alternatives to the classical likelihood ratio statistics. Traditional divergence-based test statistics are Pearson’s χ², the Freeman-Tukey statistic, and the log-likelihood ratio. The power-divergence family of statistics introduced by Cressie and Read [4] links these traditional test statistics through a single real-valued parameter. In addition, its members are monotonically related to the family of divergences introduced by Rényi [10] and modified and extended by Liese and Vajda [5]. A more general family of divergences, also used for estimation and testing, is the family of φ-divergences introduced independently by Csiszár [3] and Ali and Silvey [1]. The purpose of this work is to present a wide class of test statistics based on divergence measures which can be used to test composite hypotheses in one- or several-sample problems. They are an alternative to the classical likelihood ratio, Wald, and Rao score test statistics.
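To make the link among the traditional statistics concrete: for observed counts O_i and expected counts E_i under the null, the Cressie-Read power-divergence statistic is 2nI^λ = (2 / (λ(λ + 1))) Σ_i O_i [(O_i / E_i)^λ - 1]. The following minimal NumPy sketch is not taken from the chapter; the function name and the toy counts are illustrative assumptions. It shows how λ = 1 recovers Pearson's χ², λ → 0 (as a limit) the log-likelihood ratio G², and λ = -1/2 the Freeman-Tukey statistic.

```python
import numpy as np

def power_divergence(observed, expected, lam):
    """Cressie-Read power-divergence statistic 2nI^lambda.

    Illustrative sketch; assumes strictly positive counts.
      lam = 1    -> Pearson's chi-squared
      lam -> 0   -> log-likelihood ratio G^2 (limiting case)
      lam = -1/2 -> Freeman-Tukey
    """
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    if np.isclose(lam, 0.0):   # limit lam -> 0: G^2
        return 2.0 * np.sum(observed * np.log(observed / expected))
    if np.isclose(lam, -1.0):  # limit lam -> -1: modified likelihood ratio
        return 2.0 * np.sum(expected * np.log(expected / observed))
    return (2.0 / (lam * (lam + 1.0))) * np.sum(
        observed * ((observed / expected) ** lam - 1.0)
    )

# Hypothetical example: 4-cell multinomial, uniform null hypothesis
obs = np.array([30, 21, 25, 24])
exp = np.full(4, obs.sum() / 4)
for lam, name in [(1.0, "Pearson chi^2"), (0.0, "G^2"), (-0.5, "Freeman-Tukey")]:
    print(f"{name:>14}: {power_divergence(obs, exp, lam):.4f}")
```

Under the null, each member of the family has the same asymptotic chi-squared distribution, which is why the single parameter λ can be varied to trade off power against different alternatives without changing the reference distribution.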
Details
- ISBN:
- 978-3-540-22264-4
- Database:
- OpenAIRE
- Journal:
- Soft Methodology and Random Information Systems
- Accession number:
- edsair.doi...........991cfa70062c203e1ca1366c8e7e45e2
- Full Text:
- https://doi.org/10.1007/978-3-540-44465-7_41