263 results
Search Results
2. A Bootstrap Procedure for Adaptive Selection of the Test Statistic in Flexible Two-Stage Designs (based on a presentation given at the Workshop 'Frontiers in Adaptive Designs', 17–18 May 2001, Vienna, Austria)
- Author
-
Brit Schneider, Tim Friede, and Meinhard Kieser
- Subjects
Statistics and Probability ,Exact test ,Wilcoxon signed-rank test ,Sample size determination ,Likelihood-ratio test ,Ancillary statistic ,Statistics ,Test statistic ,Chi-square test ,General Medicine ,Statistics, Probability and Uncertainty ,Statistic ,Mathematics - Abstract
Adaptive two-stage designs allow a data-driven change of design characteristics during the ongoing trial. One of the available options is an adaptive choice of the test statistic for the second stage of the trial based on the results of the interim analysis. Since there is often only a vague knowledge of the distribution shape of the primary endpoint in the planning phase of a study, a change of the test statistic may then be considered if the data indicate that the assumptions underlying the initial choice of the test are not correct. Collings and Hamilton proposed a bootstrap method for the estimation of the power of the two-sample Wilcoxon test for shift alternatives. We use this approach for the selection of the test statistic. By means of a simulation study, we show that the gain in terms of power may be considerable when the initial assumption about the underlying distribution was wrong, whereas the loss is relatively small when in the first instance the optimal test statistic was chosen. The results also hold true for comparison with a one-stage design. Application of the method is illustrated by a clinical trial example.
- Published
- 2002
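The bootstrap power estimation that the abstract above borrows from Collings and Hamilton can be sketched in a few lines: resample the pooled interim data to mimic both arms, impose the hypothesized shift, and record how often the two-sample Wilcoxon rank-sum test rejects. The sketch below is a minimal illustration under assumed settings (normal-approximation test, two-sided 5% level, made-up interim data), not the authors' implementation.

```python
import math
import random

def avg_ranks(values):
    """Average (midrank) ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1   # midrank of the tie group
        i = j + 1
    return ranks

def wilcoxon_rejects(x, y):
    """Two-sided two-sample Wilcoxon rank-sum test at the 5% level,
    using the normal approximation (no tie correction, for brevity)."""
    n, m = len(x), len(y)
    w = sum(avg_ranks(list(x) + list(y))[:n])   # rank sum of sample x
    mu = n * (n + m + 1) / 2
    sigma = math.sqrt(n * m * (n + m + 1) / 12)
    return abs((w - mu) / sigma) > 1.959964

def bootstrap_power(pilot, shift, n1, n2, B=500, seed=1):
    """Estimate the Wilcoxon test's power for a shift alternative by
    resampling the interim data (Collings-Hamilton-style sketch)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(B):
        x = [rng.choice(pilot) for _ in range(n1)]
        y = [rng.choice(pilot) + shift for _ in range(n2)]
        hits += wilcoxon_rejects(x, y)
    return hits / B

rng = random.Random(7)
pilot = [rng.gauss(0.0, 1.0) for _ in range(40)]           # interim data
p_null = bootstrap_power(pilot, shift=0.0, n1=30, n2=30)   # ~ test size
p_alt = bootstrap_power(pilot, shift=1.0, n1=30, n2=30)    # power at shift 1
print(p_null, p_alt)
```

Comparing such bootstrap power estimates across several candidate test statistics at the interim analysis is the selection idea the abstract describes.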
3. Estimating the proportion of true null hypotheses when the statistics are discrete
- Author
-
Stefanie R. Austin, Isaac Dialsingh, and Naomi Altman
- Subjects
Statistics and Probability ,Male ,Pan troglodytes ,Iron ,Biochemistry ,Polymorphism, Single Nucleotide ,Statistics ,Null distribution ,Chi-square test ,Econometrics ,Animals ,Humans ,Molecular Biology ,Statistical hypothesis testing ,Mathematics ,Models, Statistical ,Sequence Analysis, RNA ,Gene Expression Profiling ,Muscles ,Estimator ,High-Throughput Nucleotide Sequencing ,Original Papers ,Macaca mulatta ,Computer Science Applications ,Computational Mathematics ,Exact test ,Computational Theory and Mathematics ,Liver ,Ancillary statistic ,Probability distribution ,Regression Analysis ,Cattle ,Null hypothesis ,Statistical Distributions - Abstract
Motivation: In high-dimensional testing problems, π0, the proportion of null hypotheses that are true, is an important parameter. For discrete test statistics, the P values come from a discrete distribution with finite support, and the null distribution may depend on an ancillary statistic, such as a table margin, that varies among the test statistics. Methods for estimating π0 developed for continuous test statistics, which depend on a uniform or identical null distribution of P values, may not perform well when applied to discrete testing problems. Results: This article introduces a number of π0 estimators, the regression and 'T' methods, that perform well with discrete test statistics, and also assesses how well methods developed for or adapted from continuous tests perform with discrete tests. We demonstrate the usefulness of these estimators in the analysis of high-throughput biological RNA-seq and single-nucleotide polymorphism data. Availability and implementation: Implemented in R. Contact: nsa1@psu.edu or naomi@psu.edu. Supplementary information: Supplementary data are available at Bioinformatics online.
- Published
- 2015
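For contrast with the discrete-statistic estimators the abstract above introduces, the standard continuous-test baseline (a Storey-type λ estimator, which assumes null P values are Uniform(0,1)) fits in a few lines; the mixture data below are made up for illustration:

```python
import random

def storey_pi0(pvals, lam=0.5):
    """Storey-type estimate of pi0: count P values above lam, where (under a
    Uniform(0,1) null) a fraction (1 - lam) of the true nulls should fall."""
    m = len(pvals)
    return min(1.0, sum(p > lam for p in pvals) / ((1.0 - lam) * m))

rng = random.Random(0)
# toy mixture: 80 uniform "null" P values plus 20 very small "signal" P values
pvals = [rng.random() for _ in range(80)] + [rng.random() * 0.01 for _ in range(20)]
pi0_hat = storey_pi0(pvals)
print(round(pi0_hat, 2))   # estimate of the true pi0 = 0.8
```

The paper's point is precisely that this uniformity assumption fails for discrete tests with ancillary-dependent null distributions, which is what motivates the regression and 'T' methods.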
4. Conditional inference of Poisson models and information geometry: an ancillary review
- Author
-
Sei, Tomonari
- Published
- 2024
- Full Text
- View/download PDF
5. Tests for the skewness parameter of two-piece double exponential distribution in the presence of nuisance parameters.
- Author
-
Subramanian, Leela and Dixit, Vaijayanti U
- Subjects
EXPONENTIAL functions ,DENSITY functionals ,LIKELIHOOD ratio tests ,MATHEMATICAL optimization ,MAXIMA & minima ,STATISTICS - Abstract
In this paper, tests for the skewness parameter of the two-piece double exponential distribution are derived when the location parameter is unknown. Classical tests such as the Neyman structure test and the likelihood ratio test (LRT), which are generally used to test hypotheses in the presence of nuisance parameters, are not feasible for this distribution, since the exact distributions of the test statistics become very complicated. As an alternative, we identify a set of statistics that are ancillary for the location parameter. When the scale parameter is known, the Neyman-Pearson lemma is used, and when the scale parameter is unknown, the LRT is applied to the joint density function of the ancillary statistics, in order to obtain a test for the skewness parameter of the distribution. A test for symmetry of the distribution can be deduced as a special case. The power of the proposed tests for symmetry is found to be only marginally less than the power of the corresponding classical optimum tests when the location parameter is known, especially for moderate and large sample sizes. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
6. The conditionality principle in high-dimensional regression.
- Author
-
Azriel, D
- Subjects
CONDITIONED response ,REGRESSION analysis ,MATHEMATICAL statistics ,LEARNING problems ,PROBABILITY theory - Abstract
Consider a high-dimensional linear regression problem, where the number of covariates is larger than the number of observations and the interest is in estimating the conditional variance of the response variable given the covariates. A conditional and an unconditional framework are considered, where conditioning is with respect to the covariates, which are ancillary to the parameter of interest. In recent papers, a consistent estimator was developed in the unconditional framework when the marginal distribution of the covariates is normal with known mean and variance. In the present work, a certain Bayesian hypothesis test is formulated under the conditional framework, and it is shown that the Bayes risk is a constant. This implies that no consistent estimator exists in the conditional framework. However, when the marginal distribution of the covariates is normal, the conditional error of the above consistent estimator converges to zero, with probability converging to one. It follows that even in the conditional setting, information about the marginal distribution of an ancillary statistic may have a significant impact on statistical inference. The practical implication in the context of high-dimensional regression models is that additional observations where only the covariates are given are potentially very useful and should not be ignored. This finding is most relevant to semi-supervised learning problems where covariate information is easy to obtain. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
7. Moving beyond the conventional stratified analysis to estimate an overall treatment efficacy with the data from a comparative randomized clinical study.
- Author
-
Tian, L., Jiang, F., Hasegawa, T., Uno, H., Pfeffer, M., and Wei, L. J.
- Abstract
For a two-group comparative study, a stratified inference procedure is routinely used to estimate an overall group contrast to increase the precision of the simple two-sample estimator. Unfortunately, most commonly used methods, including the Cochran-Mantel-Haenszel statistic for a binary outcome and the stratified Cox procedure for the event time endpoint, do not serve this purpose well. In fact, these procedures may be worse than their two-sample counterparts even when the observed treatment allocations are imbalanced across strata. Various procedures beyond the conventional stratified methods have been proposed to increase the precision of estimation when the naive estimator is consistent. In this paper, we are interested in the case when the treatment allocation proportions vary markedly across strata. We study the stochastic properties of the two-sample naive estimator conditional on the ancillary statistics, the observed treatment allocation proportions and/or the stratum sizes, and present a bias-adjusted estimator. This adjusted estimator is asymptotically equivalent to the augmentation estimators proposed under the unconditional setting. Moreover, this consistent estimation procedure is also equivalent to a rather simple procedure, which estimates the mean response of each treatment group first via a stratum-size weighted average and then constructs the group contrast estimate. This simple procedure is flexible and readily applicable to any target patient population by choosing appropriate stratum weights. All the proposals are illustrated with the data from a cardiovascular clinical trial, whose treatment allocations are imbalanced. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
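The "rather simple procedure" at the end of the abstract above, estimating each arm's mean as a stratum-size weighted average of within-stratum arm means and then differencing, can be sketched as follows; the stratum labels and responses are hypothetical:

```python
from collections import defaultdict

def stratified_contrast(records):
    """records: (stratum, arm, response) triples. Estimates each arm's mean
    as a stratum-size weighted average of within-stratum arm means, then
    returns the treat-minus-control contrast."""
    totals = defaultdict(lambda: [0.0, 0])   # (stratum, arm) -> [sum, count]
    size = defaultdict(int)                  # stratum -> number of patients
    for s, arm, y in records:
        totals[(s, arm)][0] += y
        totals[(s, arm)][1] += 1
        size[s] += 1
    n = sum(size.values())
    def arm_mean(arm):
        # weight each stratum's arm mean by the stratum's share of all patients
        return sum(size[s] / n * totals[(s, arm)][0] / totals[(s, arm)][1]
                   for s in size)
    return arm_mean("treat") - arm_mean("control")

# imbalanced allocation: stratum A is treat-heavy, stratum B control-heavy,
# yet the within-stratum treatment effect is 1.0 in both strata
records = [("A", "treat", 2.0), ("A", "treat", 2.0), ("A", "control", 1.0),
           ("B", "treat", 5.0), ("B", "control", 4.0), ("B", "control", 4.0)]
print(stratified_contrast(records))   # 1.0
```

On the same toy data the naive pooled contrast is (2+2+5)/3 − (1+4+4)/3 = 0, illustrating the bias from imbalanced allocation that conditioning on the allocation proportions removes.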
8. ROUTES TO HIGHER-ORDER ACCURACY IN PARAMETRIC INFERENCE.
- Author
-
YOUNG, G. ALASTAIR
- Subjects
STATISTICAL sampling ,STATISTICAL bootstrapping ,DISTRIBUTION (Probability theory) ,BAYESIAN analysis ,SIMULATION methods & models - Abstract
Developments in the theory of frequentist parametric inference in recent decades have been driven largely by the desire to achieve higher-order accuracy, in particular distributional approximations that improve on first-order asymptotic theory by one or two orders of magnitude. At the same time, much methodology is specifically designed to respect key principles of parametric inference, in particular conditionality principles. Two main routes to higher-order accuracy have emerged: analytic methods based on ‘small-sample asymptotics’, and simulation, or ‘bootstrap’, approaches. It is argued here that, of these, the simulation methodology provides a simple and effective approach, which nevertheless retains finer inferential components of theory. The paper seeks to track likely developments of parametric inference, in an era dominated by the emergence of methodological problems involving complex dependences and/or high-dimensional parameters that typically exceed available data sample sizes. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
9. Two-Stage Sample Size Re-Estimation Based on a Nuisance Parameter: A Review.
- Author
-
Proschan, Michael A.
- Subjects
STATISTICAL sampling ,ESTIMATION theory ,MATHEMATICAL statistics ,CLINICAL trials ,CLINICAL medicine research ,MATHEMATICS - Abstract
Sample size calculations are important and difficult in clinical trials because they depend on the nuisance parameter and the treatment effect. Recently, much attention has been focused on two-stage methods whereby the first stage constitutes an internal pilot study used to estimate parameters and revise the final sample size. This paper reviews two-stage methods based on estimation of nuisance parameters in either a continuous or a dichotomous outcome setting. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
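A minimal sketch of the internal-pilot idea for a continuous outcome: the planned per-group sample size for a two-sample comparison of means is n = 2σ²(z₁₋α/₂ + z₁₋β)²/δ², and the re-estimation step simply replaces the planning σ with the pilot estimate. This is the standard normal-approximation formula with illustrative numbers, not any specific method from the review.

```python
import math
from statistics import NormalDist

def per_group_n(sigma, delta, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison of means:
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2, rounded up."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (sigma * (z(1 - alpha / 2) + z(power)) / delta) ** 2)

planned = per_group_n(sigma=1.0, delta=0.5)   # design-stage guess: sigma = 1
revised = per_group_n(sigma=1.5, delta=0.5)   # internal pilot suggests sigma = 1.5
print(planned, revised)                       # 63 142
```

The review's central concern is whether plugging the interim variance estimate into this formula, and then continuing the same trial, inflates the type I error.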
10. The likelihood ratio approximation to the conditional distribution of the maximum likelihood estimator in the discrete case.
- Author
-
Severini, Thomas A.
- Subjects
ESTIMATION theory ,DISCRETE choice models ,APPROXIMATION theory ,RANDOM variables ,DISTRIBUTION (Probability theory) - Abstract
The likelihood ratio approximation, also called Barndorff-Nielsen's approximation and often denoted by p*, provides a highly accurate approximation to the conditional density of a maximum likelihood estimator Θ given an ancillary statistic. In this paper, the properties of p* are considered for the case in which the underlying random variables have a lattice distribution and Θ has a discrete, but not necessarily lattice, distribution. If Θ has a lattice distribution, then p* provides a valid approximation to the density of Θ with respect to counting measure. If the distribution of Θ is non-lattice, then p* is still a valid approximation to the conditional density of Θ; however, the dominating measure is no longer counting measure. [ABSTRACT FROM PUBLISHER]
- Published
- 2000
- Full Text
- View/download PDF
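For reference, the approximation studied in the abstract above is Barndorff-Nielsen's p* formula, which in its usual form (with L the likelihood, j the observed information, a the ancillary, and c a norming constant making the density integrate to one) reads:

```latex
p^{*}(\hat{\theta} \mid a;\, \theta)
  = c(\theta, a)\, \bigl|\, j(\hat{\theta}) \bigr|^{1/2}\,
    \frac{L(\theta)}{L(\hat{\theta})}
```

The paper's contribution concerns how this density should be interpreted when Θ is discrete, where the dominating measure need not be counting measure.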
11. Weibull Prediction Limits for a Future Number of Failures Under Parametric Uncertainty
- Author
-
Nechval, Nicholas A., Nechval, Konstantin N., Purgailis, Maris, Ao, Sio-Iong, editor, and Gelman, Len, editor
- Published
- 2013
- Full Text
- View/download PDF
12. Modern Likelihood-Frequentist Inference.
- Author
-
Pierce, Donald Alan and Bellio, Ruggero
- Subjects
INFERENCE (Logic) ,STATISTICS ,COMPUTER software ,APPROXIMATION theory ,JACOBIAN matrices - Abstract
We offer an exposition of modern higher order likelihood inference and introduce software to implement this in a quite general setting. The aim is to make more accessible an important development in statistical theory and practice. The software, implemented in an R package, requires only that the user provide code to compute the likelihood function and to specify extra-likelihood aspects of the model, such as stopping rule or censoring model, through a function generating a dataset under the model. The exposition charts a narrow course through the developments, intending thereby to make these more widely accessible. It includes the likelihood ratio approximation to the distribution of the maximum likelihood estimator, that is, the p* formula, and the transformation of this yielding a second-order approximation to the distribution of the signed likelihood ratio test statistic, based on a modified signed likelihood ratio statistic r*. This follows developments of Barndorff-Nielsen and others. The software utilises the approximation to required Jacobians as developed by Skovgaard, which is included in the exposition. Several examples of using the software are provided. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
13. Computer-intensive conditional inference.
- Author
-
Young, G. Alastair and DiCiccio, Thomas J.
- Abstract
Conditional inference is a fundamental part of statistical theory. However, exact conditional inference is often awkward, leading to the desire for methods which offer accurate approximations. Such a methodology is provided by small-sample likelihood asymptotics. We argue in this paper that simple, simulation-based methods also offer accurate approximations to exact conditional inference in multiparameter exponential family and ancillary statistic settings. Bootstrap simulation of the marginal distribution of an appropriate statistic provides a conceptually simple and highly effective alternative to analytic procedures of approximate conditional inference. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
14. Fisher and Regression.
- Author
-
Aldrich, John
- Subjects
REGRESSION analysis ,STATISTICAL correlation ,STATISTICS - Abstract
In 1922 R. A. Fisher introduced the modern regression model, synthesizing the regression theory of Pearson and Yule and the least squares theory of Gauss. The innovation was based on Fisher's realization that the distribution associated with the regression coefficient was unaffected by the distribution of X. Subsequently Fisher interpreted the fixed X assumption in terms of his notion of ancillarity. This paper considers these developments against the background of the development of statistical theory in the early twentieth century. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
15. Brown's Paradox in the Estimated Confidence Approach
- Author
-
Wang, Hsiuying
- Published
- 1999
16. The Pearson Score Statistic for Multinomial-Poisson Models
- Author
-
Joseph B. Lang
- Subjects
Statistics and Probability ,Contingency table ,One- and two-tailed tests ,Sampling distribution ,Statistics ,Ancillary statistic ,Pearson's chi-squared test ,Test statistic ,Completeness (statistics) ,Sufficient statistic ,Mathematics - Abstract
The score statistic S² is commonly used for general likelihood-based inference. Pearson's Chi-squared statistic X² = ∑(O − E)²/E is ubiquitous in contingency table inference. Because tests and confidence intervals based on S² have been shown to work well in practice and theory, and because X² has such a simple and intuitively appealing form, it is of interest to know when S² is identical to X² and when X² has an approximate Chi-squared distribution. Toward these ends, this paper gives a simple proof that S² = X² for the broad class of multinomial-Poisson distributions when the alternative hypothesis is unrestricted in a certain sense. This paper also gives a sufficient condition under which the null distribution of the Pearson score statistic is approximately Chi-squared. Several examples illustrate the utility of the results, and counter-examples highlight the importance of the sufficient conditions of the results.
- Published
- 2014
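As a concrete reminder of the statistic in the abstract above, here is the textbook computation of X² = ∑(O − E)²/E for a simple multinomial goodness-of-fit problem; the counts are made up:

```python
def pearson_x2(observed, expected):
    """Pearson's chi-squared statistic: sum of (O - E)^2 / E over cells."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 60 observations tested against equal cell probabilities (E = 20 per cell)
obs = [25, 18, 17]
exp = [20.0, 20.0, 20.0]
x2 = pearson_x2(obs, exp)
print(x2)   # ~1.9, well below the chi-squared(2) 5% critical value of 5.99
```

The paper's result is that for multinomial-Poisson models with an unrestricted alternative, this X² coincides with the score statistic S².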
17. Robust Alternatives to ANCOVA for Estimating the Treatment Effect via a Randomized Comparative Study.
- Author
-
Jiang, Fei, Tian, Lu, Fu, Haoda, Hasegawa, Takahiro, and Wei, L. J.
- Subjects
TREATMENT effectiveness ,ANALYSIS of covariance ,CLINICAL trials ,COMPARATIVE studies ,THERAPEUTICS - Abstract
In comparing two treatments via a randomized clinical trial, the analysis of covariance (ANCOVA) technique is often utilized to estimate an overall treatment effect. The ANCOVA is generally perceived as a more efficient procedure than its simple two sample estimation counterpart. Unfortunately, when the ANCOVA model is nonlinear, the resulting estimator is generally not consistent. Recently, various nonparametric alternatives to the ANCOVA, such as the augmentation methods, have been proposed to estimate the treatment effect by adjusting the covariates. However, the properties of these alternatives have not been studied in the presence of treatment allocation imbalance. In this article, we take a different approach to explore how to improve the precision of the naive two-sample estimate even when the observed distributions of baseline covariates between two groups are dissimilar. Specifically, we derive a bias-adjusted estimation procedure constructed from a conditional inference principle via relevant ancillary statistics from the observed covariates. This estimator is shown to be asymptotically equivalent to an augmentation estimator under the unconditional setting. We utilize the data from a clinical trial for evaluating a combination treatment of cardiovascular diseases to illustrate our findings. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
18. Assessing goodness of fit of generalized linear models to sparse data using higher order moment corrections
- Author
-
Dianliang Deng and Sudhir R. Paul
- Subjects
Statistics and Probability ,Generalized linear model ,Applied Mathematics ,Pearson's chi-squared test ,Deviance (statistics) ,Conditional probability distribution ,Goodness of fit ,Statistics ,Ancillary statistic ,Statistics, Probability and Uncertainty ,Completeness (statistics) ,Statistic ,Mathematics
The purpose of this paper is to assess goodness-of-fit properties of the conditional distribution of the Pearson statistic, for noncanonical generalized linear models with data that are extensive but sparse, by Edgeworth approximation of p-values using higher order moment corrections. In this paper we obtain approximations to the fourth moments of the unconditional and conditional distribution of the modified Pearson statistic with noncanonical links. This extends previous results, where approximations to the first three moments are available, and completes all usual higher order moment calculations of the modified Pearson statistic. We consider the asymptotic limit in which the data are extensive but sparse and a supplementary estimating equation for the dispersion parameter. Specific results for binomial and Poisson data are obtained separately. The methods for assessing goodness of fit using higher order moments are discussed. For testing goodness of fit of generalized linear models to sparse data, some simulations are conducted to compare, in terms of empirical size and power, the performance of the classical Pearson statistic (X²), a standardized modified Pearson statistic (X*²), a standardized modified deviance statistic (D*), a modified Pearson statistic based on Edgeworth approximation with the first three conditional moments (Z1), and a modified Pearson statistic based on Edgeworth approximation with the first four conditional moments (Z2). The statistic Z2 holds level most effectively in most situations and has some power advantage.
- Published
- 2012
19. The limit distribution of the maximum value test statistic in the general case
- Author
-
S. N. Postovalov, Artem Kovalevskii, and Petr Philonenko
- Subjects
Monte Carlo method ,Distribution function ,Statistics ,Ancillary statistic ,Chi-square test ,Test statistic ,Applied mathematics ,Z-test ,Completeness (statistics) ,Statistical hypothesis testing ,Mathematics
In this paper, the formula for the limit distribution of the maximum value test statistic is derived in the general case, together with its proof. The formula provides a precise calculation of the p-value in hypothesis testing.
- Published
- 2016
20. Loss of information of a statistic for a family of non-regular distributions, II: more general case
- Author
-
Nao Ohyauchi, Hyo Gyeong Kim, and Masafumi Akahira
- Subjects
Statistics and Probability ,PRESS statistic ,Bounded function ,Ancillary statistic ,Statistics ,Statistical parameter ,Applied mathematics ,Extreme value theory ,Completeness (statistics) ,Group family ,Statistic ,Mathematics - Abstract
In the paper of Akahira (Ann Inst Statist Math 48:349–364, 1996), it was shown that the second order asymptotic loss of information in reducing to a statistic consisting of extreme values and an asymptotically ancillary statistic vanishes for a family of non-regular distributions whose densities take the same values at the endpoints of the bounded support and whose sum of differential coefficients at those endpoints is equal to zero. In this paper, the result is extended to a family of non-regular distributions without this restriction.
- Published
- 2011
21. The Detection of Clusters Using a Spatial Version of the Chi-Square Goodness-of-Fit Statistic
- Author
-
Peter A. Rogerson
- Subjects
Geography, Planning and Development ,Pearson's chi-squared test ,Goodness of fit ,Ancillary statistic ,Statistics ,Spatial clustering ,Test statistic ,Null distribution ,Data mining ,Cluster analysis ,Statistic ,Earth-Surface Processes ,Mathematics
A test statistic for the detection of spatial clusters is developed by generalizing the common chi-square goodness-of-fit test. The paper includes a discussion of the relationship between the statistic and other associated statistics, and provides an analysis of both its null distribution and power. The paper concludes with the development of a local version of the statistic and an application to leukemia clustering in central New York.
- Published
- 2010
22. Limiting distribution of the G statistics
- Author
-
Tonglin Zhang
- Subjects
Statistics and Probability ,PRESS statistic ,F-test ,Resampling ,Ancillary statistic ,Statistics ,Test statistic ,Asymptotic distribution ,Statistics, Probability and Uncertainty ,Completeness (statistics) ,Statistic ,Mathematics - Abstract
The G statistic and its local version have been used extensively in spatial data analysis. The paper proves the asymptotic normality of the G statistic. Theorems in this paper imply that the regular permutation test for the G statistic is valid.
- Published
- 2008
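The "regular permutation test" whose validity the theorems in the abstract above underwrite can be sketched for the global G statistic in its Getis-Ord form, ∑∑ w_ij x_i x_j / ∑∑ x_i x_j over i ≠ j; the chain of ten sites and the values below are hypothetical:

```python
import random

def global_g(x, w):
    """Global G statistic: sum_{i != j} w_ij x_i x_j / sum_{i != j} x_i x_j."""
    n = len(x)
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                num += w[i][j] * x[i] * x[j]
                den += x[i] * x[j]
    return num / den

def permutation_pvalue(x, w, n_perm=999, seed=0):
    """One-sided Monte Carlo p-value: randomly relabel values over locations."""
    rng = random.Random(seed)
    g_obs = global_g(x, w)
    v = list(x)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(v)
        hits += global_g(v, w) >= g_obs
    return (hits + 1) / (n_perm + 1)

n = 10
# binary adjacency weights for a chain of ten sites
w = [[1.0 if abs(i - j) == 1 else 0.0 for j in range(n)] for i in range(n)]
x = [5.0] * 4 + [0.0] * 6            # high values clustered at one end
p = permutation_pvalue(x, w)
print(p)                             # small p-value: clustering is detected
```

The paper's asymptotic normality result is what justifies replacing this resampling step with a normal reference distribution for large samples.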
23. On the sufficient statistics for multivariate ARMA models: approximate approach
- Author
-
M. Kharrati-Kopaei, Alireza Nematollahi, and Z. Shishebor
- Subjects
Statistics and Probability ,PRESS statistic ,Sample size determination ,Statistics ,Ancillary statistic ,Test statistic ,Statistics, Probability and Uncertainty ,Completeness (statistics) ,Likelihood function ,Sufficient statistic ,Statistic ,Mathematics - Abstract
This paper investigates sufficient statistics for the parameters of vector-valued (multivariate) ARMA models when a finite sample is available. In the simplest case, ARMA(1,1), by using the factorization theorem, we present a sufficient statistic whose dimension depends on the sample size and is even larger than the sample size. In this case, and under some restrictions, we solve this problem and present a sufficient statistic whose dimension does not depend on the sample size. In the general case, due to the complexity of the problem, we use modified versions of the likelihood function to find an approximate sufficient statistic in terms of the periodogram. The dimension of this sufficient statistic depends on the sample size; however, it is much lower than the sample size.
- Published
- 2007
24. A data-adaptive methodology for finding an optimal weighted generalized Mann–Whitney–Wilcoxon statistic
- Author
-
Carey E. Priebe and Majnu John
- Subjects
Statistics and Probability ,Wilcoxon signed-rank test ,Applied Mathematics ,Pearson's chi-squared test ,Brown–Forsythe test ,Computational Mathematics ,Exact test ,Computational Theory and Mathematics ,F-test ,Statistics ,Ancillary statistic ,Test statistic ,Goldfeld–Quandt test ,Mathematics
Xie and Priebe [2002. 'Generalizing the Mann-Whitney-Wilcoxon Statistic'. J. Nonparametric Statist. 12, 661-682] introduced the class of weighted generalized Mann-Whitney-Wilcoxon (WGMWW) statistics, which contains as special cases the classical Mann-Whitney test statistic and many other nonparametric distribution-free test statistics commonly used for the two-sample testing problem. The two-sample test that they proposed was based on any statistic within the class of WGMWW statistics optimal in the Pitman asymptotic efficacy (PAE) sense. In this paper, among other things, we show via simulation studies that for finite samples the PAE-optimal WGMWW test has substantially higher empirical power compared to the classical Mann-Whitney test for various underlying densities (especially for those densities for which the Mann-Whitney test is considered a better alternative to parametric tests such as t-tests). The PAE-optimal WGMWW test is not a candidate for the practitioner's toolbox, since the corresponding test statistic contains parameters which are functions of the underlying null distribution function of the samples. The main thrust of this paper is in introducing a data-adaptive alternative to the PAE-optimal WGMWW test, which has efficacy and power as good as the latter. We provide an estimate φ̂ for the PAE function φ of a WGMWW statistic, and our test is based on a φ̂-optimal WGMWW statistic. We prove strong consistency of φ̂, thereby showing that our test has approximately the same efficacy as the φ-optimal WGMWW test for large sample sizes. Via simulation studies we show that for finite samples the empirical power of the φ̂-optimal WGMWW test is almost the same as that of the φ-optimal WGMWW test for various underlying densities. We also analyze magnetic imaging data related to subjects with and without Alzheimer's disease to illustrate our methodology.
In summary, we present a strong competitor for the classical Mann-Whitney-Wilcoxon test and many other existing nonparametric distribution-free tests, especially for moderate and large samples.
- Published
- 2007
25. THE EXACT CUMULATIVE DISTRIBUTION FUNCTION OF A RATIO OF QUADRATIC FORMS IN NORMAL VARIABLES, WITH APPLICATION TO THE AR(1) MODEL
- Author
-
Giovanni Forchini
- Subjects
Economics and Econometrics ,PRESS statistic ,Likelihood-ratio test ,Cumulative distribution function ,Autocorrelation ,Statistics ,Ancillary statistic ,Applied mathematics ,Completeness (statistics) ,Social Sciences (miscellaneous) ,Sufficient statistic ,Statistic ,Mathematics - Abstract
Often neither the exact density nor the exact cumulative distribution function (c.d.f.) of a statistic of interest is available in the statistics and econometrics literature (e.g., the maximum likelihood estimator of the autocorrelation coefficient in a simple Gaussian AR(1) model with zero start-up value). In other cases the exact c.d.f. of a statistic of interest is very complicated despite the statistic being “simple” (e.g., the circular serial correlation coefficient, or a quadratic form of a vector uniformly distributed over the unit n-sphere). The first part of the paper tries to explain why this is the case by studying the analytic properties of the c.d.f. of a statistic under very general assumptions. Differential geometric considerations show that there can be points where the c.d.f. of a given statistic is not analytic, and such points do not depend on the parameters of the model but only on the properties of the statistic itself. The second part of the paper derives the exact c.d.f. of a ratio of quadratic forms in normal variables, and for the first time a closed form solution is found. These results are then specialized to the maximum likelihood estimator of the autoregressive parameter in a Gaussian AR(1) model with zero start-up value, which is shown to have precisely those properties highlighted in the first part of the paper.
- Published
- 2002
26. Modeling claim exceedances over thresholds
- Author
-
Markos V. Koutras and Michael V. Boutsikas
- Subjects
Statistics and Probability ,Economics and Econometrics ,Scan statistic ,Poisson distribution ,Risk model ,Ancillary statistic ,Statistics ,Econometrics ,Financial problem ,Statistics, Probability and Uncertainty ,Completeness (statistics) ,Statistic ,Mathematics
In this paper, we consider a simple risk model and study the occurrences of clusters of threshold exceedances by the individual claims. The statistic used to study the model is the discrete multiple scan statistic. A compound Poisson approximation is established and certain asymptotic results are obtained for both the risk model and a similar in nature financial problem. Finally, we review two typical examples from areas of applied science where the outcomes of this paper may have beneficial impact.
- Published
- 2002
27. Generalized bartlett correction
- Author
-
Gauss M. Cordeiro and Silvia Ferrari
- Subjects
Statistics and Probability ,PRESS statistic ,Sampling distribution ,Ancillary statistic ,Statistics ,Applied mathematics ,Asymptotic distribution ,Nonparametric skew ,INFERÊNCIA PARAMÉTRICA ,Pivotal quantity ,Completeness (statistics) ,Sufficient statistic ,Mathematics - Abstract
This paper provides a general method of modifying a statistic of interest in such a way that the distribution of the modified statistic can be approximated by an arbitrary reference distribution to an order of accuracy of O(n^{-1/2}) or even O(n^{-1}). The reference distribution is usually the asymptotic distribution of the original statistic. We prove that the multiplication of the statistic by a suitable stochastic correction improves the asymptotic approximation to its distribution. This paper extends the results of the closely related paper by Cordeiro and Ferrari (1991) to cope with several other statistical tests. The resulting expression for the adjustment factor requires knowledge of the Edgeworth-type expansion to order O(n^{-1}) for the distribution of the unmodified statistic. In practice its functional form involves some derivatives of the reference distribution. Certain differences between the cumulants of appropriate order in n of the unmodified statistic and those of its first-order approximation, ...
- Published
- 1998
28. Weibull Prediction Limits for a Future Number of Failures Under Parametric Uncertainty
- Author
-
Maris Purgailis, Konstantin N. Nechval, and Nicholas A. Nechval
- Subjects
Order statistic ,Ancillary statistic ,Statistics ,Applied mathematics ,Invariant (mathematics) ,Pivotal quantity ,Scale parameter ,Shape parameter ,Weibull distribution ,Parametric statistics ,Mathematics - Abstract
In this paper, we present an accurate procedure, called “within-sample prediction of order statistics,” to obtain prediction limits for the number of failures that will be observed in a future inspection of a sample of units, based only on the results of the first in-service inspection of the same sample. The failure-time of such units is modeled with a two-parameter Weibull distribution indexed by scale and shape parameters β and δ, respectively. It will be noted that the literature considers only the case in which the scale parameter β is unknown but the shape parameter δ is known. As a rule, in practice the Weibull shape parameter δ is not known. Instead it is estimated subjectively or from relevant data. Thus, its value is uncertain. This δ uncertainty may contribute greater uncertainty to the construction of prediction limits for a future number of failures. In this paper, we consider the case when both parameters β and δ are unknown. The technique proposed here for constructing prediction limits emphasizes pivotal quantities relevant for obtaining ancillary statistics and represents a special case of the method of invariant embedding of sample statistics into a performance index, applicable whenever the statistical problem is invariant under a group of transformations that acts transitively on the parameter space. Application to other distributions could follow directly.
- Published
- 2012
29. Equivariant Estimation in a Model with an Ancillary Statistic
- Author
-
Kariya, Takeaki
- Published
- 1989
30. Finite-sample instrumental variables inference using an asymptotically pivotal statistic
- Author
-
Paul A. Bekker, Frank Kleibergen, and UvA-Econometrics (ASE, FEB)
- Subjects
Economics and Econometrics ,Instrumental variable ,Inference ,WEAK INSTRUMENTS ,Pivotal quantity ,Exact distribution ,Regression ,Statistics ,Ancillary statistic ,REGRESSION ,Applied mathematics ,Completeness (statistics) ,Social Sciences (miscellaneous) ,Statistic ,Mathematics - Abstract
The paper considers the K-statistic, Kleibergen’s (2000) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Compared to the AR statistic, this K-statistic shows improved asymptotic efficiency in terms of degrees of freedom in overidentified models and yet it shares, asymptotically, the pivotal property of the AR statistic. That is, asymptotically it has a chi-square distribution whether or not the model is identified. This pivotal property is very relevant for size distortions in finite-sample tests. Whereas Kleibergen (2000) focuses especially on the asymptotic behavior of the statistic, the present paper concentrates on finite-sample properties in a Gaussian framework. In that case the AR statistic has an F-distribution. However, the K-statistic is not exactly pivotal. Its finite-sample distribution is affected by nuisance parameters. Here we consider the two extreme cases, which provide tight bounds for the exact distribution. The first case amounts to perfect identification, which is similar to the asymptotic case, where the statistic has an F-distribution. In the other extreme case there is total underidentification. For the latter case we show how to compute the exact distribution. Thus we provide tight bounds for exact confidence sets based on the efficient K-statistic. Asymptotically the two bounds converge, except when there is a large number of redundant instruments. This paper has resulted in a publication in Econometric Theory, 2003, 19, 744-753.
- Published
- 2001
31. Brown's paradox in the estimated confidence approach
- Author
-
Hsiuying Wang
- Subjects
Statistics and Probability ,coverage function ,62C10 ,Confidence interval ,Coverage probability ,Estimator ,ancillary statistic ,Ancillary statistic ,Statistics ,Statistical inference ,Confidence distribution ,Econometrics ,admissibility ,Point estimation ,Statistics, Probability and Uncertainty ,Statistical theory ,62C15 ,the usual constant coverage probability estimator ,Mathematics - Abstract
A widely held notion of classical conditional theory is that statistical inference in the presence of ancillary statistics should be independent of the distribution of those ancillary statistics. In this paper, ancillary paradoxes which contradict this notion are presented for two scenarios involving confidence estimation. These results are related to Brown’s ancillary paradox in point estimation. Moreover, the confidence coefficient, the usual constant coverage probability estimator, is shown to be inadmissible for confidence estimation in the multiple regression model with random predictor variables if the dimension of the slope parameters is greater than five. Some estimators better than the confidence coefficient are provided in this paper. These new estimators are constructed based on empirical Bayes estimators.
- Published
- 1999
32. Configural Polysampling
- Author
-
Morgenthaler, Stephan, Friedman, Avner, editor, Miller, Willard, Jr., editor, Stahel, Werner, and Weisberg, Sanford
- Published
- 1991
- Full Text
- View/download PDF
33. Modified stationarity tests with improved power in small samples
- Author
-
Jörg Breitung
- Subjects
Statistics and Probability ,PRESS statistic ,F-test ,KPSS test ,Statistics ,Ancillary statistic ,Test statistic ,Estimator ,Statistics, Probability and Uncertainty ,Completeness (statistics) ,Statistic ,Mathematics - Abstract
In a recent paper Kwiatkowski et al. (1992) propose the so-called KPSS statistic for testing the null hypothesis of stationarity against the alternative of a unit root process. The statistic employs a spectral estimator which can be shown to diverge with increasing sample size when the alternative is true. Here, we suggest a modified spectral estimator which is shown to stabilize for moving average models. It is shown that this test statistic uniformly outperforms the KPSS statistic in an MA(1) model. Furthermore, a two-step nonparametric correction procedure is suggested, giving a test statistic with asymptotic properties similar to those of the original KPSS statistic. However, in small samples this correction performs better, especially in detecting large random walk components.
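A minimal sketch of the level-stationarity KPSS statistic may help fix ideas: it is the scaled sum of squared partial sums of the demeaned series, normalized by a long-run variance estimate. The Bartlett-weighted (Newey-West) estimator used below is the standard construction; the paper's modified spectral estimator would replace exactly that step.

```python
def kpss_statistic(y, lags=0):
    """Level-stationarity KPSS statistic: T^-2 * sum of squared partial sums
    of the demeaned series, divided by a long-run variance estimate."""
    T = len(y)
    ybar = sum(y) / T
    e = [v - ybar for v in y]
    s, partial = 0.0, []
    for v in e:
        s += v
        partial.append(s)
    # Bartlett-weighted (Newey-West) long-run variance; the paper's modified
    # spectral estimator would replace this step
    lrv = sum(v * v for v in e) / T
    for k in range(1, lags + 1):
        gamma = sum(e[t] * e[t - k] for t in range(k, T)) / T
        lrv += 2.0 * (1.0 - k / (lags + 1.0)) * gamma
    return sum(p * p for p in partial) / (T * T * lrv)
```

Large values of the statistic reject stationarity: under a unit root the partial sums grow, while the long-run variance estimator in the denominator is exactly the quantity whose divergence under the alternative motivates the modification.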
- Published
- 1995
34. The conditionality principle in high-dimensional regression
- Author
-
David Azriel
- Subjects
Statistics and Probability ,Applied Mathematics ,General Mathematics ,Mathematics - Statistics Theory ,Regression analysis ,Statistics Theory (math.ST) ,Agricultural and Biological Sciences (miscellaneous) ,Bayes' theorem ,Consistent estimator ,Ancillary statistic ,Covariate ,FOS: Mathematics ,Statistical inference ,Econometrics ,Statistics, Probability and Uncertainty ,Marginal distribution ,General Agricultural and Biological Sciences ,Conditional variance ,Mathematics - Abstract
Summary Consider a high-dimensional linear regression problem, where the number of covariates is larger than the number of observations and the interest is in estimating the conditional variance of the response variable given the covariates. A conditional and an unconditional framework are considered, where conditioning is with respect to the covariates, which are ancillary to the parameter of interest. In recent papers, a consistent estimator was developed in the unconditional framework when the marginal distribution of the covariates is normal with known mean and variance. In the present work, a certain Bayesian hypothesis test is formulated under the conditional framework, and it is shown that the Bayes risk is a constant. This implies that no consistent estimator exists in the conditional framework. However, when the marginal distribution of the covariates is normal, the conditional error of the above consistent estimator converges to zero, with probability converging to one. It follows that even in the conditional setting, information about the marginal distribution of an ancillary statistic may have a significant impact on statistical inference. The practical implication in the context of high-dimensional regression models is that additional observations where only the covariates are given are potentially very useful and should not be ignored. This finding is most relevant to semi-supervised learning problems where covariate information is easy to obtain.
- Published
- 2019
35. Moving beyond the conventional stratified analysis to estimate an overall treatment efficacy with the data from a comparative randomized clinical study
- Author
-
Lu Tian, Marc A. Pfeffer, L. J. Wei, Hajime Uno, Fei Jiang, and Takahiro Hasegawa
- Subjects
Statistics and Probability ,Epidemiology ,Inference ,Mean and predicted response ,Contrast (statistics) ,Estimator ,01 natural sciences ,Treatment and control groups ,010104 statistics & probability ,03 medical and health sciences ,0302 clinical medicine ,Statistics ,Ancillary statistic ,030212 general & internal medicine ,0101 mathematics ,Statistic ,Mathematics ,Event (probability theory) - Abstract
For a two-group comparative study, a stratified inference procedure is routinely used to estimate an overall group contrast to increase the precision of the simple two-sample estimator. Unfortunately, most commonly used methods including the Cochran-Mantel-Haenszel statistic for a binary outcome and the stratified Cox procedure for the event time endpoint do not serve this purpose well. In fact, these procedures may be worse than their two-sample counterparts even when the observed treatment allocations are imbalanced across strata. Various procedures beyond the conventional stratified methods have been proposed to increase the precision of estimation when the naive estimator is consistent. In this paper, we are interested in the case when the treatment allocation proportions vary markedly across strata. We study the stochastic properties of the two-sample naive estimator conditional on the ancillary statistics, the observed treatment allocation proportions and/or the stratum sizes, and present a bias-adjusted estimator. This adjusted estimator is asymptotically equivalent to the augmentation estimators proposed under the unconditional setting. Moreover, this consistent estimation procedure is also equivalent to a rather simple procedure, which estimates the mean response of each treatment group first via a stratum-size weighted average and then constructs the group contrast estimate. This simple procedure is flexible and readily applicable to any target patient population by choosing appropriate stratum weights. All the proposals are illustrated with the data from a cardiovascular clinical trial, whose treatment allocations are imbalanced.
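The "rather simple procedure" described in the abstract can be sketched directly (illustrative code, not the authors' implementation): estimate each group's mean response by a stratum-size weighted average of within-stratum means, then take the difference.

```python
def stratified_contrast(data):
    """Estimate each group mean via a stratum-size weighted average of
    within-stratum means, then form the treatment-control contrast.
    data maps a stratum label to (treated responses, control responses)."""
    n_total = sum(len(t) + len(c) for t, c in data.values())
    mu_t = mu_c = 0.0
    for t, c in data.values():
        w = (len(t) + len(c)) / n_total          # stratum-size weight
        mu_t += w * sum(t) / len(t)
        mu_c += w * sum(c) / len(c)
    return mu_t - mu_c

strata = {
    "A": ([2.0, 4.0], [1.0, 3.0]),     # balanced allocation
    "B": ([10.0], [6.0, 8.0, 10.0]),   # markedly imbalanced allocation
}
print(stratified_contrast(strata))     # prints 1.5
```

Because the weights depend only on stratum sizes, not on the observed allocation proportions, the estimate is unaffected by the allocation imbalance within stratum B, which is the point the abstract makes about conditioning on the ancillary statistics.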
- Published
- 2018
36. An influence statistic for linear measurement error models
- Author
-
Hadi Emami
- Subjects
PRESS statistic ,Applied Mathematics ,05 social sciences ,01 natural sciences ,010104 statistics & probability ,F-test ,Likelihood-ratio test ,0502 economics and business ,Ancillary statistic ,Statistics ,Test statistic ,0101 mathematics ,Completeness (statistics) ,Cook's distance ,Statistic ,050205 econometrics ,Mathematics - Abstract
Detection of multiple outliers or of a subset of influential points has rarely been considered in linear measurement error models. In this paper a new influence statistic for one observation or a set of observations is generalized and characterized based on the corrected likelihood in the linear measurement error model. This influence statistic can be expressed in terms of the residuals and the leverages of the linear measurement error regression. Unlike Cook’s statistic, this new measure of influence has an asymptotically normal distribution and is able to detect a subset of high-leverage outliers that is not identified by Cook’s statistic. As an illustrative example, simulation studies and a real data set are analysed.
- Published
- 2017
37. Testing hypothesis for a simple ordering in incomplete contingency tables
- Author
-
Nian-Sheng Tang, Xuejun Jiang, Hui-Qiong Li, and Guo-Liang Tian
- Subjects
Statistics and Probability ,PRESS statistic ,Applied Mathematics ,05 social sciences ,Pearson's chi-squared test ,Wald test ,01 natural sciences ,010104 statistics & probability ,Computational Mathematics ,symbols.namesake ,Computational Theory and Mathematics ,F-test ,Likelihood-ratio test ,0502 economics and business ,Statistics ,Ancillary statistic ,Econometrics ,symbols ,Test statistic ,0101 mathematics ,Statistic ,050205 econometrics ,Mathematics - Abstract
A test for ordered categorical variables is of considerable importance, because they are frequently encountered in biomedical studies. This paper introduces a simple ordering test approach for the two-way r × c contingency tables with incomplete counts by developing six test statistics, i.e., the likelihood ratio test statistic, score test statistic, global score test statistic, Hausman-Wald test statistic, Wald test statistic and distance-based test statistic. Bootstrap resampling methods are also presented. The performance of the proposed tests is evaluated with respect to their empirical type I error rates and empirical powers. The results show that the likelihood ratio test statistic based on the bootstrap resampling methods performs satisfactorily for small to large sample sizes. A real example from a wheeze study in six cities is used to illustrate the proposed methodologies.
- Published
- 2016
38. Approximation of distributions by using the Anderson Darling statistic
- Author
-
Eckhard Liebscher
- Subjects
Statistics and Probability ,Anderson–Darling test ,Estimator ,Asymptotic distribution ,020206 networking & telecommunications ,02 engineering and technology ,01 natural sciences ,010104 statistics & probability ,Minimum distance estimation ,Sampling distribution ,Statistics ,Ancillary statistic ,0202 electrical engineering, electronic engineering, information engineering ,Applied mathematics ,0101 mathematics ,Completeness (statistics) ,Statistic ,Mathematics - Abstract
In practice, it is often not possible to find an appropriate family of distributions which can be used for fitting the sample distribution with high precision. In these cases, it seems opportune to search for the best approximation by a family of distributions instead of an exact fit. In this paper, we consider the Anderson–Darling statistic with plugged-in minimum distance estimator for the parameter vector. We prove asymptotic normality of the Anderson–Darling statistic, which is used for a test of goodness of approximation. Moreover, we introduce a measure of discrepancy between the sample distribution and the model class.
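For reference, the Anderson–Darling statistic into which the paper plugs a minimum distance estimate has the standard computational form sketched below; the `cdf` argument stands for the fitted model cdf (here, for illustration, the Uniform(0,1) cdf rather than a minimum distance fit).

```python
import math

def anderson_darling(sample, cdf):
    """A^2 statistic for a model cdf; in the paper's setting cdf would carry a
    plugged-in minimum distance estimate of the parameter vector."""
    z = sorted(cdf(x) for x in sample)
    n = len(z)
    s = sum((2 * i + 1) * (math.log(z[i]) + math.log(1.0 - z[n - 1 - i]))
            for i in range(n))
    return -n - s / n

# Uniform(0,1) model: the cdf is the identity on [0, 1]
print(round(anderson_darling([0.25, 0.5, 0.75], lambda x: x), 4))  # prints 0.2694
```

The log terms weight the tails heavily, which is why the statistic is popular for measuring discrepancy between a sample distribution and a model class.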
- Published
- 2016
39. Testing for seasonal means in time series data
- Author
-
Robert Lund, Jonathan Woody, Q. Shao, and Gang Liu
- Subjects
Statistics and Probability ,Independent and identically distributed random variables ,Series (mathematics) ,Ecological Modeling ,05 social sciences ,050401 social sciences methods ,01 natural sciences ,Order of integration ,010104 statistics & probability ,0504 sociology ,Likelihood-ratio test ,Ancillary statistic ,Statistics ,Time domain ,0101 mathematics ,Time series ,Statistic ,Mathematics - Abstract
The statistician often needs to test whether or not a time series has a seasonal first moment. The problem often arises in environmental series, where most time-ordered data display some type of periodic structure. This paper reviews the problem, proposing new statistics in both the time and frequency domains. Our new time domain statistic has an analysis of variance form that is based on the one-step-ahead prediction errors of the series. This statistic inherits the classic traits of the F-distribution arising in one-way analysis of variance tests, is easy to use, and is asymptotically equivalent to the likelihood ratio test. The statistic's asymptotic distribution is quantified when time series parameters are estimated. In the frequency domain, a statistic modifying Fisher's classical test for a sinusoidal mean superimposed on independent and identically distributed Gaussian noise is devised. The performance and comparison of these statistics are studied via simulation. Implementation of the methods merely requires sample means, autocovariances, and periodograms of the series. Application to a data set of monthly temperatures from Tuscaloosa, Alabama, is given. Copyright © 2016 John Wiley & Sons, Ltd.
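Fisher's classical frequency-domain test that the paper modifies is easy to sketch (illustrative code, not the authors' modified statistic): compute periodogram ordinates at the Fourier frequencies and take the largest relative to their sum.

```python
import cmath
import math

def fisher_g(y):
    """Fisher's g statistic: largest periodogram ordinate at the Fourier
    frequencies divided by the sum of all ordinates."""
    n = len(y)
    ords = []
    for k in range(1, (n - 1) // 2 + 1):       # Fourier frequencies 2*pi*k/n
        s = sum(y[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        ords.append(abs(s) ** 2 / n)
    return max(ords) / sum(ords)

# A pure sinusoidal (seasonal) mean puts essentially all power at one frequency
season = [math.cos(2 * math.pi * 2 * t / 12) for t in range(12)]
print(fisher_g(season) > 0.999)  # prints True
```

A value of g near 1 indicates a single dominant periodic component, exactly the sinusoidal-mean alternative of Fisher's test; under white noise the ordinates are roughly equal and g is small.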
- Published
- 2016
40. A High-dimensionality-adjusted Consistent Cp-type Statistic for Selecting Variables in a Normality-assumed Linear Regression with Multiple Responses
- Author
-
Hirokazu Yanagihara
- Subjects
multivariate linear regression model ,Proper linear model ,PRESS statistic ,Computer science ,Binomial regression ,multivariate normal distribution ,Design matrix ,Multivariate normal distribution ,Cross-sectional regression ,generalized Cp statistic ,01 natural sciences ,Data matrix (multivariate statistics) ,010104 statistics & probability ,Linear predictor function ,Bayesian multivariate linear regression ,0502 economics and business ,Linear regression ,Statistics ,0101 mathematics ,Statistic ,050205 econometrics ,General Environmental Science ,consistency ,05 social sciences ,Regression analysis ,moderately high-dimensional data ,Linear probability model ,Sample size determination ,Ancillary statistic ,General Earth and Planetary Sciences ,Marginal distribution ,variable selection - Abstract
In this paper, we consider the consistency of Cp-type statistics for selecting variables in a normality-assumed linear regression with multiple responses when the dimension of the vector of the response variables may be large. We propose a new consistent Cp-type statistic for which consistency can be achieved whenever the dimension of the response variables vector is fixed or goes to infinity. A high probability of selecting the true subset of explanatory variables can be expected under a moderate sample size when the proposed Cp-type statistic is used to select variables, even when there is a high-dimensional response variables vector.
- Published
- 2016
41. On the validation of fiducial techniques.
- Author
-
Plante, André
- Subjects
DISTRIBUTION (Probability theory) ,VECTOR spaces ,PROBABILITY theory ,PARAMETER estimation ,PROBABILITY measures
- Published
- 1979
- Full Text
- View/download PDF
42. MULTIMODALITY p**-FORMULA AND CONFIDENCE REGIONS
- Author
-
Kees Jan van Garderen, Fallaw Sowell, Faculteit Economie en Bedrijfskunde, and UvA-Econometrics (ASE, FEB)
- Subjects
Economics and Econometrics ,05 social sciences ,Inference ,Conditional probability distribution ,Disjoint sets ,01 natural sciences ,Bimodality ,010104 statistics & probability ,0502 economics and business ,Ancillary statistic ,Statistics ,Errors-in-variables models ,0101 mathematics ,Nonlinear regression ,Social Sciences (miscellaneous) ,Statistic ,050205 econometrics ,Mathematics - Abstract
Barndorff-Nielsen’s celebrated p*-formula and variations thereof have amongst their various attractions the ability to approximate bimodal distributions. In this paper we show that in general this requires a crucial adjustment to the basic formula. The adjustment is based on a simple idea and straightforward to implement, yet delivers important improvements. It is based on recognizing that certain outcomes are theoretically impossible and the density of the MLE should then equal zero, rather than the positive density that a straight application of p* would suggest. This has implications for inference and we show how to use the new p**-formula to construct improved confidence regions. These can be disjoint as a consequence of the bimodality. The degree of bimodality depends heavily on the value of an approximate ancillary statistic and conditioning on the observed value of this statistic is therefore desirable. The p**-formula naturally delivers the relevant conditional distribution. We illustrate these results in small and large samples using a simple nonlinear regression model and errors in variables model where the measurement errors in dependent and explanatory variables are correlated and allow for weak proxies.
- Published
- 2018
43. On a stochastic inequality for the wilks statistic
- Author
-
Arjun K. Gupta
- Subjects
Statistics and Probability ,symbols.namesake ,Multivariate analysis of variance ,Likelihood-ratio test ,Statistics ,Ancillary statistic ,Pearson's chi-squared test ,symbols ,Percentage point ,Completeness (statistics) ,Upper and lower bounds ,Statistic ,Mathematics - Abstract
Summary The general nonnull distribution of Wilks statistic, the likelihood ratio statistic in MANOVA, can be expressed as a product of conditional beta variables [3]. Making use of this result, in the present paper, an upper bound for the nonnull distribution of Wilks statistic is obtained, which provides a conservative evaluation of the power of the likelihood ratio test for the cases when the alternative hypothesis is of rank 1, 2 or 3. For p=2, where p is the number of variables, and large f2, the degrees of freedom, it has been shown that the results of this paper give a much better approximation to the power of Wilks statistic than Mikhail's approximation [10]. A few percentage points have also been computed for p=3 and selected values of the degrees of freedom and the noncentrality parameters, which in the linear case have been compared with the exact values obtained by the author [7].
- Published
- 1975
44. Fiducial consistency and group structure
- Author
-
Donald Fraser
- Subjects
Statistics and Probability ,Mathematical optimization ,Location parameter ,Applied Mathematics ,General Mathematics ,Pivotal quantity ,Agricultural and Biological Sciences (miscellaneous) ,Ancillary statistic ,Fiducial inference ,Probability mass function ,Applied mathematics ,Probability distribution ,Statistics, Probability and Uncertainty ,General Agricultural and Biological Sciences ,Statistic ,Sufficient statistic ,Mathematics - Abstract
SUMMARY A consistency criterion is proposed for fiducial probability distributions: if a fiducial density function is used to modulate a prior density function, the resulting frequency function should be that obtained by a Bayesian analysis. For a real variable and a real parameter, the fulfilment of this criterion is equivalent to a condition examined by Lindley (1958) and it requires the parameter to be essentially a location parameter. For several variables and the same number of parameters, the criterion if applied to fiducial probability derived from a pivotal quantity produces a simple differential equation condition on the pivotal quantity. If one-dimensional fiducial distributions can be generated in an arbitrary direction from any point in the parameter space, then the fulfilment of the criterion requires the model to be of transformation-parameter form; for such models a variety of properties concerned with consistency and interpretation are available (Fraser, 1961). The fiducial method of inference was introduced by Fisher (1930) and has been discussed and developed in many of his subsequent papers. The development is primarily in terms of examples, with but little attention to precise governing principles. In this paper the method will be interpreted in terms of the pivotal quantities that are a common feature in the examples. A pivotal quantity is a function of a sufficient statistic and a parameter that has a fixed frequency distribution. For a single real variable and a single real parameter the distribution function is a pivotal quantity and it has the uniform distribution on (0, 1) regardless of the parameter value. In some statistical problems there may be a natural choice for the pivotal quantity based, perhaps on the physical origin of the problem. The quantity may then express a position, a probability or error position for an outcome relative to the parameter value. 
In other problems there may be a natural choice based on mathematical properties of the specification. In this paper a pivotal quantity will be assumed given as part of the statistical problem: the distribution of the pivotal variable will be interpreted as describing the error distribution or error pattern, and the pivotal equation as describing the way that the probability mass of the error pattern is applied to the sample space. The fiducial method as presented by Fisher involves a preliminary reduction to an exhaustive statistic. If the exhaustive statistic is of the same dimension as the parameter it is used as the basic variable in the pivotal quantity. If the exhaustive statistic is of higher dimension, then an ancillary statistic is needed such that conditionally the exhaustive statistic has the same dimension as the parameter and can be used as basic variable in the pivotal quantity.
- Published
- 1965
45. Generalization of the Durbin-Watson statistic for higher order autoregressive processes
- Author
-
Hrishikesh D. Vinod
- Subjects
Statistics and Probability ,PRESS statistic ,Durbin–Watson statistic ,Ancillary statistic ,Autocorrelation ,Statistics ,Linear model ,Completeness (statistics) ,Beta distribution ,Statistic ,Mathematics - Abstract
The Durbin-Watson statistic is used for testing the first order serial correlation among residuals in a linear model. It is based on the residuals from a corresponding regression analysis. In this paper a generalization of the statistic which tests for higher order dependence among residuals is proposed. The paper gives a brief review of the Durbin-Watson theory and the construction of the corresponding significance tables based on Jacobi orthogonal polynomials and the beta density. The distribution of the Durbin-Watson statistic based on Imhof's distribution of quadratic forms is indicated as an alternative method of directly computing the distribution function of the Durbin-Watson statistic, as well as of its generalization. Significance values at the 5 percent level of significance for lags 2 to 4 are given in Table II.
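The lag-d generalization described here has the same form as the familiar lag-1 statistic, with the residual differences taken d steps apart (a sketch under the usual convention; `resid` stands for residuals from the fitted regression).

```python
def durbin_watson(resid, lag=1):
    """Generalized Durbin-Watson statistic at a given lag:
    d_lag = sum_t (e_t - e_{t-lag})^2 / sum_t e_t^2."""
    num = sum((resid[t] - resid[t - lag]) ** 2 for t in range(lag, len(resid)))
    den = sum(e * e for e in resid)
    return num / den

resid = [1.0, -1.0, 1.0, -1.0]
print(durbin_watson(resid, lag=1))  # prints 3.0: near 4, negative lag-1 correlation
print(durbin_watson(resid, lag=2))  # prints 0.0: perfect positive lag-2 correlation
```

As at lag 1, values near 2 suggest no serial correlation at that lag, values near 0 positive correlation, and values near 4 negative correlation.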
- Published
- 1973
46. Partitions of Pearson’s Chi-square statistic for frequency tables: a comprehensive account
- Author
-
Yoshio Takane and Sébastien Loisel
- Subjects
Statistics and Probability ,Contingency table ,Pearson's chi-squared test ,010103 numerical & computational mathematics ,010501 environmental sciences ,01 natural sciences ,Computational Mathematics ,symbols.namesake ,Likelihood-ratio test ,Statistics ,Ancillary statistic ,symbols ,0101 mathematics ,Statistics, Probability and Uncertainty ,Statistic ,0105 earth and related environmental sciences ,Mathematics - Abstract
Pearson's Chi-square statistic for frequency tables depends on what is hypothesized as the expected frequencies. Its partitions also depend on the hypothesis. Lancaster (J R Stat Soc B 13:242–249, 1951) proposed ANOVA-like partitions of Pearson's statistic under several representative hypotheses about the expected frequencies. His expositions were, however, not entirely clear. In this paper, we clarify his method of derivations, and extend it to more general situations. A comparison is made with analogous decompositions of the log likelihood ratio statistic associated with log-linear analysis of contingency tables.
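The two statistics being partitioned and compared can be sketched for a two-way table under the independence hypothesis (illustrative code; the paper's contribution is the partitioning, not these base statistics):

```python
import math

def independence_statistics(table):
    """Pearson X^2 and log likelihood ratio G^2 for a two-way frequency table
    under the independence hypothesis."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    x2 = g2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / n          # expected count under independence
            x2 += (obs - exp) ** 2 / exp
            if obs > 0:
                g2 += 2.0 * obs * math.log(obs / exp)
    return x2, g2

x2, g2 = independence_statistics([[20, 10], [10, 20]])
```

Under a different hypothesized set of expected frequencies, only the `exp` line changes, which is why both the statistics and their ANOVA-like partitions depend on the hypothesis.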
- Published
- 2015
47. A Simple Panel Unit-Root Test with Smooth Breaks in the Presence of a Multifactor Error Structure
- Author
-
Chingnun Lee, Lixiong Yang, and Jyh-Lin Wu
- Subjects
Statistics and Probability ,Economics and Econometrics ,05 social sciences ,Asymptotic distribution ,Function (mathematics) ,Regression ,symbols.namesake ,Fourier transform ,Dummy variable ,Unit root test ,Simple (abstract algebra) ,Ancillary statistic ,Statistics ,0502 economics and business ,symbols ,Applied mathematics ,Point (geometry) ,050207 economics ,Statistics, Probability and Uncertainty ,Fourier series ,Social Sciences (miscellaneous) ,Smoothing ,Statistic ,050205 econometrics ,Mathematics - Abstract
This paper proposes a new simple panel unit-root test by extending the cross-sectionally augmented panel unit-root test (CIPS) developed by Pesaran et al. (2013) to allow for smoothing structural changes in deterministic terms, approximated by a Fourier series. The proposed statistic is the simple average of the individual statistics constructed from the breaks and cross-sectional dependence augmented Dickey-Fuller (BCADF) regression and is called the BCIPS statistic. We initially develop the tests by assuming that the number of factors in the model is known and show that the limiting distribution of the BCADF statistic is free of nuisance parameters. The nonstandard limiting distribution of the (truncated) BCIPS statistic is also shown to exist and its critical values are tabulated. Monte Carlo experiments indicate that the sizes and powers of the BCIPS statistic are generally satisfactory as long as T is greater than or equal to fifty and a hundred, respectively. By using two different methods to determine the number of factors, both the BCIPS and CIPS tests are applied to examine the validity of long-run purchasing power parity. The proposed test complements the panel unit-root tests with breaks using dummy variables.
- Published
- 2015
48. A Note on Approximation of Likelihood Ratio Statistic in Exploratory Factor Analysis
- Author
-
Masanori Ichikawa
- Subjects
PRESS statistic ,Likelihood-ratio test ,Statistics ,Ancillary statistic ,Likelihood ratio statistic ,Completeness (statistics) ,Likelihood principle ,Exploratory factor analysis ,Statistic ,Mathematics - Abstract
In normal theory exploratory factor analysis, the likelihood ratio (LR) statistic plays an important role in evaluating the goodness-of-fit of the model. In this paper, we derive an approximation of the LR statistic. The approximation is then used to show explicitly that the expectation of the LR statistic agrees with the degrees of freedom of the asymptotic chi-square distribution.
- Published
- 2015
49. On the Optimality of Estimators Based on P-Sufficient Statistics
- Author
-
Bhapkar, Vasant P.
- Published
- 2000
- Full Text
- View/download PDF
50. Testing the Equality of Two Exponential Distributions
- Author
-
Husam Awni Bayoud and Omar A. Kittaneh
- Subjects
Statistics and Probability ,05 social sciences ,Pearson's chi-squared test ,050401 social sciences methods ,Brown–Forsythe test ,01 natural sciences ,010104 statistics & probability ,symbols.namesake ,0504 sociology ,F-test ,Modeling and Simulation ,Likelihood-ratio test ,Statistics ,Ancillary statistic ,Test statistic ,symbols ,0101 mathematics ,Statistic ,Student's t-test ,Mathematics - Abstract
This paper proposes an overlapping-based test statistic for testing the equality of two exponential distributions with different scale and location parameters. The test statistic is defined as the maximum likelihood estimate of the Weitzman's overlapping coefficient, which estimates the agreement of two densities. The proposed test statistic is derived in closed form. Simulated critical points are generated for the proposed test statistic for various sample sizes and significance levels via Monte Carlo simulations. Statistical powers of the proposed test are computed via simulation studies and compared to those of the existing log-likelihood ratio test.
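For two exponential densities sharing a common location, the overlap integral has a closed form, so the test statistic (the overlap evaluated at the maximum likelihood estimates) can be sketched as follows. This is an illustrative simplification: the paper's statistic also handles different location parameters, which are omitted here.

```python
import math

def exp_overlap(rate1, rate2):
    """Weitzman overlap of two exponential densities with a common location
    (the different-location case treated in the paper is omitted here)."""
    if rate1 == rate2:
        return 1.0
    a, b = sorted((rate1, rate2))
    x = math.log(b / a) / (b - a)      # point where the two densities cross
    return (1.0 - math.exp(-a * x)) + math.exp(-b * x)

# Plug in the maximum likelihood estimates: rate_hat = n / sum(sample)
sample1, sample2 = [1.0, 2.0, 3.0], [0.5, 1.0, 1.5]
stat = exp_overlap(len(sample1) / sum(sample1), len(sample2) / sum(sample2))
print(stat)  # about 0.75; the overlap depends only on the ratio of the two rates
```

An overlap near 1 supports equality of the two distributions; small simulated critical values of `stat` would lead to rejection.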
- Published
- 2014