31 results for "62C99"
Search Results
2. Stepwise multiple test procedures and control of directional errors
- Author
-
H. Finner
- Subjects
62C99 ,Statistics and Probability ,multiple hypotheses testing ,variation diminishing property ,totally positive of order 3 ,Closed multiple test procedure ,step-down procedure ,chemistry.chemical_compound ,F-test ,type III error ,familywise error rate ,stepwise multiple test procedure ,unimodality ,62F07 ,SIMes ,62F03 ,step-up procedure ,Scheffé's method ,Mathematics ,multiple comparisons ,Order statistic ,Regression analysis ,Type III error ,Conditional independence ,chemistry ,multiple level of significance ,Multiple comparisons problem ,directional error ,62J15 ,Statistics, Probability and Uncertainty ,closure principle ,Algorithm - Abstract
One of the most difficult problems occurring with stepwise multiple test procedures for a set of two-sided hypotheses is the control of directional errors if rejection of a hypothesis is accompanied by a directional decision. In this paper we generalize a result for so-called step-down procedures derived by Shaffer to a large class of stepwise or closed multiple test procedures. In a unifying way we obtain results for a large class of order statistics procedures including step-down as well as step-up procedures (Hochberg, Rom), but also a procedure of Hommel based on critical values derived by Simes. Our method of proof is also applicable in situations where directional decisions are mainly based on conditionally independent $t$-statistics. A closed $F$-test procedure applicable in regression models with orthogonal design, the modified $S$-method of Scheffé applicable in the Analysis of Variance, and Fisher's LSD-test for the comparison of three means will be considered in more detail.
- Published
- 1999
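The directional-decision issue discussed in the preceding entry can be made concrete with a generic step-down rule. The sketch below is an illustrative assumption, not the paper's construction: it applies Holm's step-down procedure to two-sided $z$-statistics and, on each rejection, reports a directional decision given by the sign of the statistic. The paper's step-up, Simes-based and closed $F$-test variants are not reproduced.

```python
# Hedged sketch: Holm's step-down procedure with directional decisions.
# Holm's rule is one standard step-down procedure; the paper covers a broader
# class (step-up, Hommel/Simes-based, closed F-tests) not reproduced here.
import numpy as np
from scipy.stats import norm

def step_down_directional(z, alpha=0.05):
    """Two-sided Holm step-down test on z-statistics; each rejection is
    reported together with a directional decision (sign of the statistic)."""
    m = len(z)
    p = 2 * norm.sf(np.abs(z))               # two-sided p-values
    order = np.argsort(p)                    # most significant first
    decisions = ["accept"] * m
    for step, idx in enumerate(order):
        if p[idx] <= alpha / (m - step):     # Holm's step-down critical value
            decisions[idx] = "theta > 0" if z[idx] > 0 else "theta < 0"
        else:
            break                            # stop at the first acceptance
    return decisions

# Example with three two-sided hypotheses
print(step_down_directional(np.array([2.9, -2.4, 0.8])))
```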
3. Simultaneous estimation of location parameters for sign-invariant distributions
- Author
-
Jian-Lun Xu
- Subjects
risk function ,62C99 ,Statistics and Probability ,location parameters ,Multivariate random variable ,Estimator ,Simultaneous estimation ,quadratic loss ,Residual ,Combinatorics ,Quadratic equation ,Statistics ,62H12 ,Statistics, Probability and Uncertainty ,Invariant (mathematics) ,sign-invariant distributions ,62F10 ,Mathematics - Abstract
Estimation of $p, p \geq 3$, location parameters of a distribution of a p-dimensional random vector $\mathsf{X}$ is considered under quadratic loss. Explicit estimators which are better than the best invariant one are given for a sign-invariantly distributed random vector $\mathsf{X}$. The results depend only on the second and the third moments of $|| \mathsf{X} - \theta ||$. The generalizations to concave loss functions and to n observations are also considered. Additionally, if the scale is unknown, we investigate the estimators of the location parameters when the observation contains a residual vector.
- Published
- 1997
4. Estimation of a Loss Function for Spherically Symmetric Distributions in the General Linear Model
- Author
-
Martin T. Wells and Dominique Fourdrinier
- Subjects
62C99 ,Statistics and Probability ,Mathematical optimization ,62A99 ,Estimator ,conditional inference ,Trimmed estimator ,shrinkage estimation ,Spherical symmetry ,Efficient estimator ,Minimum-variance unbiased estimator ,Bias of an estimator ,Consistent estimator ,Stein's unbiased risk estimate ,Applied mathematics ,loss estimation ,Statistics, Probability and Uncertainty ,62C15 ,62C05 ,62F35 ,Invariant estimator ,Mathematics - Abstract
This paper is concerned with estimating the loss of a point estimator when sampling from a spherically symmetric distribution. We examine the canonical setting of a general linear model where the dimension of the parameter space is greater than 4 and less than the dimension of the sampling space. We consider two location estimators, the least squares estimator and a shrinkage estimator, and we compare their unbiased loss estimators with an improved loss estimator. The domination results are valid for a large class of spherically symmetric distributions and, insofar as the sampling distribution does not need to be precisely specified, the estimates have desirable robustness properties.
- Published
- 1995
5. The Risk Inflation Criterion for Multiple Regression
- Author
-
Dean P. Foster and Edward I. George
- Subjects
62C99 ,Statistics and Probability ,Risk analysis ,minimax ,model selection ,62C20 ,multiple regression ,Model selection ,Decision theory ,g-prior ,Contrast (statistics) ,Feature selection ,Minimax ,62J05 ,Statistics ,Linear regression ,Econometrics ,Statistics, Probability and Uncertainty ,risk ,variable selection ,Mathematics - Abstract
A new criterion is proposed for the evaluation of variable selection procedures in multiple regression. This criterion, which we call the risk inflation, is based on an adjustment to the risk. Essentially, the risk inflation is the maximum increase in risk due to selecting rather than knowing the "correct" predictors. A new variable selection procedure is obtained which, in the case of orthogonal predictors, substantially improves on AIC, $C_p$ and BIC and is close to optimal. In contrast to AIC, $C_p$ and BIC which use dimensionality penalties of 2, 2 and $\log n$, respectively, this new procedure uses a penalty $2 \log p$, where $p$ is the number of available predictors. For the case of nonorthogonal predictors, bounds for the optimal penalty are obtained.
- Published
- 1994
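In the orthogonal-design case described in the preceding entry, AIC/$C_p$, BIC and the risk inflation criterion all reduce to hard-thresholding the standardized coefficient estimates, differing only in the dimensionality penalty. The sketch below illustrates this on simulated data; the design, $\sigma = 1$ and the sparse coefficient vector are assumptions for illustration, not taken from the paper.

```python
# Hedged sketch: variable selection with orthogonal predictors, where AIC/C_p,
# BIC, and the risk inflation criterion (RIC) differ only in the penalty
# applied to the squared standardized coefficient estimates.
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 100, 50, 1.0
beta = np.zeros(p)
beta[:5] = 0.8                               # 5 true predictors (assumption)
z = np.sqrt(n) * beta / sigma + rng.standard_normal(p)  # standardized estimates

def select(z, penalty):
    # keep predictor j iff its squared z-statistic exceeds the penalty
    return np.flatnonzero(z**2 > penalty)

print("AIC/C_p (2):   ", select(z, 2.0))
print("BIC (log n):   ", select(z, np.log(n)))
print("RIC (2 log p): ", select(z, 2 * np.log(p)))
```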
6. Improving on the James-Stein Positive-Part Estimator
- Author
-
Peter Yi-Shi Shao and William E. Strawderman
- Subjects
62C99 ,Statistics and Probability ,location parameters ,James–Stein estimator ,Minimaxity ,Rao–Blackwell theorem ,Combinatorics ,Minimum-variance unbiased estimator ,Efficient estimator ,Bias of an estimator ,Consistent estimator ,Statistics ,Stein's unbiased risk estimate ,squared error loss ,Statistics, Probability and Uncertainty ,62H99 ,62F10 ,Invariant estimator ,Mathematics - Abstract
The purpose of this paper is to give an explicit estimator dominating the positive-part James-Stein rule. The James-Stein estimator improves on the "usual" estimator $X$ of a multivariate normal mean vector $\theta$ if the dimension $p$ of the problem is at least 3. It has been known since at least 1964 that the positive-part version of this estimator improves on the James-Stein estimator. Brown's 1971 results imply that the positive-part version is itself inadmissible although this result was assumed to be true much earlier. Explicit improvements, however, have not previously been found; indeed, 1988 results of Bock and of Brown imply that no estimator dominating the positive-part estimator exists whose unbiased estimator of risk is uniformly smaller than that of the positive-part estimator.
- Published
- 1994
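For reference, the sketch below implements the ordinary and positive-part James-Stein estimators mentioned in the preceding entry and compares their Monte Carlo risks with the usual estimator $X$ near the origin; the explicit estimator constructed in the paper to dominate the positive-part rule is not reproduced.

```python
# Hedged sketch: ordinary and positive-part James-Stein estimators with a
# small Monte Carlo risk comparison against the usual estimator X.
import numpy as np

def james_stein(x):
    p = x.size
    return (1 - (p - 2) / np.dot(x, x)) * x

def james_stein_plus(x):
    p = x.size
    return max(0.0, 1 - (p - 2) / np.dot(x, x)) * x

rng = np.random.default_rng(1)
p, reps = 8, 20000
theta = np.zeros(p)                      # risk gains are largest near the origin
risks = {"X": 0.0, "JS": 0.0, "JS+": 0.0}
for _ in range(reps):
    x = theta + rng.standard_normal(p)
    risks["X"]   += np.sum((x - theta) ** 2)
    risks["JS"]  += np.sum((james_stein(x) - theta) ** 2)
    risks["JS+"] += np.sum((james_stein_plus(x) - theta) ** 2)
print({k: round(v / reps, 3) for k, v in risks.items()})
```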
7. A Unified Approach to Improving Equivariant Estimators
- Author
-
Tatsuya Kubokawa
- Subjects
exponential ,62C99 ,Statistics and Probability ,Interval estimation ,James–Stein estimator ,Noncentral chi-squared distribution ,inadmissibility ,simultaneous estimation of multinormal mean ,Estimator ,Multivariate normal distribution ,Point and interval estimation of variance ,James-Stein rule ,normal ,Normal distribution ,Brewster-Zidek estimator ,noncentral chi-square distribution ,Calculus ,Equivariant map ,Applied mathematics ,62F25 ,best affine equivariant estimator ,Statistics, Probability and Uncertainty ,Scale parameter ,62F11 ,Mathematics - Abstract
In the point and interval estimation of the variance of a normal distribution with an unknown mean, the best affine equivariant estimators are dominated by Stein's truncated and Brewster and Zidek's smooth procedures, which are separately derived. This paper gives a unified approach to this problem by using a simple definite integral and provides a class of improved procedures in both point and interval estimation of powers of the scale parameter of normal, lognormal, exponential and Pareto distributions. Finally, the same method is applied to the improvement on the James-Stein rule in the simultaneous estimation of a multinormal mean.
- Published
- 1994
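One of the two improved procedures named in the preceding entry, Stein's truncated variance estimator, has a simple closed form. The sketch below implements it in its classical form alongside the best affine equivariant estimator $S/(n+1)$; Brewster and Zidek's smooth rule and the paper's unified class are not reproduced, and the simulated data are an assumption.

```python
# Hedged sketch: Stein's (1964) truncated estimator of a normal variance under
# scaled quadratic loss, compared with the best affine equivariant estimator.
import numpy as np

def best_affine_equivariant(x):
    s = np.sum((x - x.mean()) ** 2)
    return s / (len(x) + 1)

def stein_truncated(x):
    n = len(x)
    s = np.sum((x - x.mean()) ** 2)
    # use the information in the sample mean when it suggests mu is near 0
    return min(s / (n + 1), (s + n * x.mean() ** 2) / (n + 2))

rng = np.random.default_rng(2)
x = rng.normal(loc=0.3, scale=2.0, size=15)   # illustrative data (assumption)
print(best_affine_equivariant(x), stein_truncated(x))
```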
8. Increasing the Confidence in Student's $t$ Interval
- Author
-
Constantinos Goutis and George Casella
- Subjects
Conditional confidence ,62C99 ,Statistics and Probability ,62A99 ,education ,Confidence interval ,Robust confidence intervals ,relevant sets ,Statistics ,Econometrics ,Confidence distribution ,Credible interval ,62F25 ,Tolerance interval ,Statistics, Probability and Uncertainty ,Binomial proportion confidence interval ,CDF-based nonparametric confidence interval ,Population variance ,Mathematics - Abstract
The usual confidence interval, based on Student's $t$ distribution, has conditional confidence that is larger than the nominal confidence level. Although this fact is known, along with the fact that increased conditional confidence can be used to improve a confidence assertion, the confidence assertion of Student's $t$ interval has never been critically examined. We do so here, and construct a confidence estimator that allows uniformly higher confidence in the interval and is closer (than $1 - \alpha$) to the indicator of coverage.
- Published
- 1992
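The "usual" interval whose conditional confidence is examined in the preceding entry is the standard Student's $t$ interval; a minimal sketch of it follows. The improved confidence estimator constructed in the paper is not reproduced, and the simulated data are an assumption.

```python
# Hedged sketch: the usual Student's t confidence interval for a normal mean
# with unknown variance.
import numpy as np
from scipy.stats import t

def t_interval(x, alpha=0.05):
    n = len(x)
    half_width = t.ppf(1 - alpha / 2, df=n - 1) * x.std(ddof=1) / np.sqrt(n)
    return x.mean() - half_width, x.mean() + half_width

rng = np.random.default_rng(3)
print(t_interval(rng.normal(size=12)))
```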
9. Non-Existence of an Adaptive Estimator for the Value of an Unknown Probability Density
- Author
-
Mark G. Low
- Subjects
62C99 ,Statistics and Probability ,Mathematical optimization ,Minimax risk ,Estimation theory ,adaptive estimation ,Estimator ,Density estimation ,Delta method ,Bias of an estimator ,density estimation ,Consistent estimator ,Adaptive estimator ,62G07 ,Applied mathematics ,Statistics, Probability and Uncertainty ,Invariant estimator ,Mathematics - Abstract
A strong adaptive criterion is defined for density estimation problems. In a particular case it is shown that there is no strongly adaptive sequence of estimators. In contrast, Woodroofe has shown that a weakly adaptive result holds.
- Published
- 1992
10. Shrinkage Domination in a Multivariate Common Mean Problem
- Author
-
Edward I. George
- Subjects
Risk ,62C99 ,Statistics and Probability ,Stein estimators ,Multivariate statistics ,Mean squared error ,Estimator ,Multivariate normal distribution ,robustness ,shrinkage estimation ,Lambda ,Combinatorics ,Linear map ,62J07 ,Common mean ,Linear regression ,Statistics ,62H12 ,Statistics, Probability and Uncertainty ,Mathematics - Abstract
Consider the problem of estimating the $p \times 1$ mean vector $\theta$ under expected squared error loss, based on the observation of two independent multivariate normal vectors $Y_1 \sim N_p(\theta, \sigma^2I)$ and $Y_2 \sim N_p(\theta, \lambda\sigma^2I)$ when $\lambda$ and $\sigma^2$ are unknown. For $p \geq 3$, estimators of the form $\delta_\eta = \eta Y_1 + (1 - \eta)Y_2$ where $\eta$ is a fixed number in (0, 1), are shown to be uniformly dominated in risk by Stein estimators in spite of the fact that independent estimates of scale are unavailable. A consequence of this result is that when $\lambda$ is assumed known, shrinkage domination is robust to incorrect specification of $\lambda$.
- Published
- 1991
11. Information Inequalities for the Bayes Risk
- Author
-
Lawrence D. Brown and Lesław Gajek
- Subjects
62C99 ,Statistics and Probability ,Bayes risk ,Covariance matrix ,Estimator ,Upper and lower bounds ,Numerical integration ,Binomial distribution ,Bayes' theorem ,Quadratic equation ,Exponential family ,exponential family ,Econometrics ,62F15 ,60E15 ,Statistics, Probability and Uncertainty ,Mathematical economics ,62F10 ,Information inequality (Cramer-Rao inequality) ,Mathematics - Abstract
This paper presents lower bounds, derived from the information inequality, for the Bayes risk under scaled quadratic loss. Some numerical results are also presented which give some idea concerning the precision of these bounds. An appendix contains a proof of the information inequality without conditions on the estimator. This result is a direct extension of an earlier result of Fabian and Hannan.
- Published
- 1990
12. Bayes Estimation from a Markov Renewal Process
- Author
-
Michael J. Phelan
- Subjects
62C99 ,Statistics and Probability ,Bayes estimator ,Dirichlet ,Markov chain ,Markov renewal process ,Markov process ,Lévy process ,Dirichlet distribution ,Bayes' theorem ,symbols.namesake ,Beta processes ,62M02 ,symbols ,Econometrics ,Bayes nonparametric estimation ,Applied mathematics ,62G05 ,Statistics, Probability and Uncertainty ,Recursive Bayesian estimation ,Mathematics - Abstract
A procedure for Bayes nonparametric estimation from a Markov renewal process is developed. It is based on a conjugate class of a priori distributions on the parameter space of semi-Markov transition distributions. The class is characterized by a Dirichlet family of distributions for random Markov matrices and a Beta family of Lévy processes for random cumulative hazard functions. The main result is the derivation of the posterior law from an observation of the Markov renewal process over a period of time.
- Published
- 1990
13. Estimation of the Inverse Covariance Matrix: Random Mixtures of the Inverse Wishart Matrix and the Identity
- Author
-
L. R. Haff
- Subjects
62C99 ,Statistics and Probability ,Wishart distribution ,Matrix gamma distribution ,Inverse covariance matrix ,Multivariate gamma function ,quadratic loss ,Square matrix ,law.invention ,Combinatorics ,Matrix (mathematics) ,Invertible matrix ,Stokes' theorem ,law ,Centering matrix ,Matrix function ,integration by parts ,Stein-like estimators ,Statistics, Probability and Uncertainty ,62F10 ,Mathematics - Abstract
Let $S_{p \times p}$ have a nonsingular Wishart distribution with unknown matrix $\Sigma$ and $k$ degrees of freedom. For two different loss functions, estimators of $\Sigma^{-1}$ are given which dominate the obvious estimators $aS^{-1}, 0 < a \leqslant k - p - 1$. Our class of estimators $\mathscr{C}$ includes random mixtures of $S^{-1}$ and $I$. A subclass $\mathscr{C}_0 \subset \mathscr{C}$ was given by Haff. Here, we show that any member of $\mathscr{C}_0$ is dominated in $\mathscr{C}$. Some troublesome aspects of the estimation problem are discussed, and the theory is supplemented by simulation results.
- Published
- 1979
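The "obvious" estimators $aS^{-1}$ referred to in the preceding entry include the unbiased choice $a = k - p - 1$, since $E[S^{-1}] = \Sigma^{-1}/(k - p - 1)$ for a nonsingular Wishart matrix with $k > p + 1$. The sketch below checks this unbiasedness by simulation; Haff's improved mixtures of $S^{-1}$ and $I$ are not reproduced, and the particular $\Sigma$, $p$ and $k$ are assumptions.

```python
# Hedged sketch: Monte Carlo check that (k - p - 1) * S^{-1} is unbiased for
# Sigma^{-1} when S ~ Wishart(Sigma, k); the entry's improved estimators
# dominate estimators of this simple form.
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(4)
p, k = 4, 20
Sigma = 0.5 * np.eye(p) + 0.5            # illustrative Sigma (assumption)
a = k - p - 1

est = np.zeros((p, p))
reps = 5000
for _ in range(reps):
    S = wishart.rvs(df=k, scale=Sigma, random_state=rng)
    est += a * np.linalg.inv(S)
print(np.round(est / reps, 2))           # Monte Carlo average, close to Sigma^{-1}
print(np.round(np.linalg.inv(Sigma), 2))
```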
14. Minimax Estimation of Location Parameters for Spherically Symmetric Distributions with Concave Loss
- Author
-
Ann Cohen Brandwein and William E. Strawderman
- Subjects
62C99 ,Statistics and Probability ,Location parameter ,Concave function ,Bar (music) ,location parameter ,Minimax ,Symmetric probability distribution ,multivariate ,Quadratic equation ,Minimax estimation ,Applied mathematics ,Statistics, Probability and Uncertainty ,Minimax estimator ,spherically symmetric ,62H99 ,62F10 ,Invariant estimator ,Mathematics - Abstract
For $p \geqslant 4$ and one observation $X$ on a $p$-dimensional spherically symmetric distribution, minimax estimators of $\theta$ whose risks are smaller than the risk of $X$ (the best invariant estimator) are found when the loss is a nondecreasing concave function of quadratic loss. For $n$ observations $X_1, X_2, \cdots, X_n$, we have classes of minimax estimators which are better than the usual procedures, such as the best invariant estimator, $\bar{X}$, or a maximum likelihood estimator.
- Published
- 1980
15. Minimax Estimation of Location Vectors for a Wide Class of Densities
- Author
-
James O. Berger
- Subjects
62C99 ,Statistics and Probability ,completely monotonic ,Location parameter ,Lebesgue measure ,Mathematical analysis ,location parameter ,Monotonic function ,quadratic loss ,Positive-definite matrix ,Minimax ,Minimax approximation algorithm ,Minimax estimators ,Combinatorics ,Matrix (mathematics) ,Statistics, Probability and Uncertainty ,Minimax estimator ,62H99 ,62F10 ,Mathematics - Abstract
Assume $X = (X_1,\cdots, X_p)^t$ has a $p$-variate density, with respect to Lebesgue measure, of the form $f((x - \theta)^t\Sigma^{-1}(x - \theta))$. Here $\Sigma$ is a known positive definite $p \times p$ matrix and $p \geqq 3$. Assume either (i) $f$ is completely monotonic, or (ii) there exist $\alpha > 0$ and $K > 0$ for which $h(s) = f(s)e^{\alpha s}$ is nondecreasing and nonzero if $s > K$. Then for estimating $\theta$ under a known quadratic loss, classes of minimax estimators are found.
- Published
- 1975
16. A General Theory of Asymptotic Consistency for Subset Selection with Applications
- Author
-
Jan F. Bjornstad
- Subjects
62C99 ,Statistics and Probability ,Pointwise ,Class (set theory) ,Mathematical optimization ,consistency ,Selection (relational algebra) ,Decision theory ,decision-theory ,Sigma ,Asymptotic theory (statistics) ,Combinatorics ,asymptotic theory ,Compact space ,Consistency (statistics) ,62F07 ,invariance ,Statistics, Probability and Uncertainty ,Subset selection ,Mathematics - Abstract
The problem of selecting a random nonempty subset from $k$ populations, characterized by $\theta_1, \cdots, \theta_k$ with possible nuisance parameters $\sigma$, is considered using a decision-theoretic approach. The concept of asymptotic consistency is defined as the property that the risk of a procedure at $(\theta, \sigma)$ tends to the minimum loss at $(\theta, \sigma)$. Necessary and sufficient conditions for both pointwise and uniform (on compact sets) consistency for permutation-invariant procedures are derived with general loss functions. Various loss functions are considered for the case where the goal is to select populations with $\theta_i$ close to $\max \theta_j$. Applications are made to normal populations. It is shown that Gupta's procedure is the only procedure in Seal's class that can be consistent. Other Bayes and admissible procedures are also considered.
- Published
- 1984
17. A Finite Memory Test of the Irrationality of the Parameter of a Coin
- Author
-
Thomas M. Cover and Patrick Hirschler
- Subjects
62C99 ,Statistics and Probability ,Sequence ,Rational number ,Null set ,Combinatorics ,Bernoulli's principle ,rationals ,hypothesis testing ,coin ,Statistics, Probability and Uncertainty ,Arithmetic ,Memory test ,Finite set ,Finite memory ,Mathematics - Abstract
Let $X_1, X_2, \ldots$ be a Bernoulli sequence with parameter $p$. An algorithm $T_{n+1} = f(T_n, X_n, n)$; $d_n = d(T_n)$; $f: \{1, 2, \ldots, 8\} \times \{0, 1\} \times \{0, 1, \ldots\} \rightarrow \{1, \ldots, 8\}$; $d: \{1, 2, \ldots, 8\} \rightarrow \{H_0, H_1\}$ is found such that $d(T_n) = H_0$ all but a finite number of times with probability one if $p$ is rational, and $d(T_n) = H_1$ all but a finite number of times with probability one if $p$ is irrational (and not in a given null set of irrationals). Thus, an 8-state memory with a time-varying algorithm makes only a finite number of mistakes with probability one in determining the rationality of the parameter of a coin. In particular, determining the rationality of the Bernoulli parameter $p$ does not require infinite memory of the data.
- Published
- 1975
18. Estimation of the Correlation Coefficient from a Broken Random Sample
- Author
-
Prem K. Goel and Morris H. DeGroot
- Subjects
bivariate normal distribution ,62C99 ,Statistics and Probability ,estimation ,Fisher information ,Correlation coefficient ,Estimation theory ,Fisher transformation ,Multivariate normal distribution ,Minimum chi-square estimation ,Pearson product-moment correlation coefficient ,Combinatorics ,symbols.namesake ,broken random sample ,Statistics ,symbols ,likelihood function ,Statistics, Probability and Uncertainty ,Likelihood function ,62F10 ,Partial correlation ,62H20 ,Mathematics - Abstract
Inference about the correlation coefficient $\rho$ in a bivariate normal distribution is considered when observations from the distribution are available only in the form of a broken random sample. In other words, a random sample of $n$ pairs is drawn from the distribution but the observed data are only the first components of the $n$ pairs and, separately, some unknown permutation of the second components of the $n$ pairs. Under these conditions, the estimation of $\rho$ is, as Samuel Johnson put it, "like a dog's walking on his hinder legs. It is not done well; but you are surprised to find it done at all." We study the maximum likelihood estimation of $\rho$ and present some effective procedures for estimating the sign of $\rho$.
- Published
- 1980
19. Relation of the Best Invariant Predictor and the Best Unbiased Predictor in Location and Scale Families
- Author
-
Yoshikazu Takada
- Subjects
62C99 ,Statistics and Probability ,Statistics::Theory ,Scale (ratio) ,Invariance ,prediction ,Best linear unbiased prediction ,Invariant (physics) ,Lehmann–Scheffé theorem ,Statistics::Machine Learning ,unbiasedness ,Minimum-variance unbiased estimator ,Bias of an estimator ,Stein's unbiased risk estimate ,Statistics ,Statistics::Methodology ,location and scale family ,Statistics, Probability and Uncertainty ,Scale parameter ,62F99 ,Mathematics - Abstract
In this paper a necessary and sufficient condition for a predictor to be the best invariant predictor in location and scale families is given. Using this condition, it is shown that the best invariant predictor is expressed by a linear combination of the best unbiased predictor and the best unbiased estimator of the scale parameter.
- Published
- 1981
20. Estimation of a Covariance Matrix under Stein's Loss
- Author
-
Cidambi Srinivasan and Dipak K. Dey
- Subjects
62C99 ,Statistics and Probability ,Wishart distribution ,Covariance matrix ,Estimator ,Stein's loss ,Sample (statistics) ,Scale invariance ,minimax estimators ,Statistics ,orthogonally invariant estimators ,Applied mathematics ,Statistics, Probability and Uncertainty ,Invariant (mathematics) ,Minimax estimator ,62F10 ,Eigenvalues and eigenvectors ,Mathematics - Abstract
Stein's general technique for improving upon the best invariant unbiased and minimax estimators of the normal covariance matrix is described. The technique is to obtain solutions to a certain differential inequality involving the eigenvalues of the sample covariance matrix. Several improved estimators are obtained by solving the differential inequality. These estimators shrink or expand the sample eigenvalues depending on their magnitude. A scale invariant, adaptive minimax estimator is also obtained.
- Published
- 1985
21. Empirical Bayes Estimation of a Distribution (Survival) Function from Right Censored Observations
- Author
-
J. Van Ryzin and V. Susarla
- Subjects
62C99 ,Statistics and Probability ,Bayes' rule ,Dirichlet process priors ,Statistics::Theory ,Bayes estimator ,Generalization ,Nonparametric statistics ,Empirical distribution function ,right censored observations ,Dirichlet process ,Empirical Bayes estimation ,Bayes' theorem ,Survival function ,Statistics ,Econometrics ,Statistics::Methodology ,nonparametric estimation of a distribution (survival) function ,62G05 ,Statistics, Probability and Uncertainty ,Mathematics - Abstract
This paper provides an empirical Bayes approach to the problem of nonparametric estimation of a distribution (or survival) function when the observations are censored on the right. The results use the notion of a Dirichlet process prior (Ferguson, 1973, Ann. Stat., 2, 209-230). The paper generalizes, to the case of right censored observations, the rate result of Korwar and Hollander (1974, Tech. Rept. No. 288, Dept. of Statistics, Florida State University) for an empirical Bayes nonparametric estimator of a distribution function in the uncensored case. The rate of asymptotic convergence to optimality is shown to be the best obtainable for the problem considered.
- Published
- 1978
22. Minimax Estimation of a Normal Mean Vector for Arbitrary Quadratic Loss and Unknown Covariance Matrix
- Author
-
Lawrence D. Brown, Michael Bock, George Casella, L. Gleser, and James O. Berger
- Subjects
risk function ,62C99 ,Statistics and Probability ,Wishart distribution ,Covariance matrix ,Estimator ,quadratic loss ,Positive-definite matrix ,mean ,Covariance ,normal ,Combinatorics ,Normal distribution ,Wishart ,Distribution (mathematics) ,Statistics ,unknown covariance matrix ,Minimax ,Statistics, Probability and Uncertainty ,62H99 ,Random matrix ,62F10 ,Mathematics - Abstract
Let $X$ be an observation from a $p$-variate normal distribution $(p \geqq 3)$ with mean vector $\theta$ and unknown positive definite covariance matrix $\Sigma$. It is desired to estimate $\theta$ under the quadratic loss $L(\delta, \theta, \Sigma) = (\delta - \theta)^tQ(\delta - \theta)/\operatorname{tr}(Q\Sigma)$, where $Q$ is a known positive definite matrix. Estimators of the following form are considered: $\delta^c(X, W) = (I - c\alpha Q^{-1}W^{-1}/(X^tW^{-1}X))X$, where $W$ is a $p \times p$ random matrix with a Wishart $(\Sigma, n)$ distribution (independent of $X$), $\alpha$ is the minimum characteristic root of $(QW)/(n - p - 1)$ and $c$ is a positive constant. For appropriate values of $c$, $\delta^c$ is shown to be minimax and better than the usual estimator $\delta^0(X) = X$.
- Published
- 1977
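The estimator $\delta^c(X, W)$ is displayed explicitly in the abstract above, so it can be transcribed directly. The sketch below does so; the range of constants $c$ for which minimaxity holds is established in the paper and not reproduced, and the simulated $X$, $W$ and the choice $Q = I$ are assumptions for illustration.

```python
# Hedged sketch: direct transcription of the estimator delta^c(X, W) displayed
# in the abstract; the admissible range of c is given in the paper, not here.
import numpy as np

def delta_c(x, W, Q, n, c):
    p = len(x)
    Winv = np.linalg.inv(W)
    # alpha: minimum characteristic root of (Q W) / (n - p - 1)
    alpha = np.min(np.real(np.linalg.eigvals(Q @ W))) / (n - p - 1)
    shrink = c * alpha * np.linalg.inv(Q) @ Winv / (x @ Winv @ x)
    return x - shrink @ x

# Illustrative call with simulated X and W (assumptions, not from the paper)
rng = np.random.default_rng(5)
p, n = 5, 12
Q = np.eye(p)
X = rng.standard_normal(p)
A = rng.standard_normal((n, p))
W = A.T @ A                               # a Wishart(I, n) draw
print(delta_c(X, W, Q, n, c=1.0))
```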
23. Minimax Estimation of a Normal Mean Vector When the Covariance Matrix is Unknown
- Author
-
Leon Jay Gleser
- Subjects
risk function ,62C99 ,Statistics and Probability ,Wishart distribution ,Lebesgue measure ,Multivariate normal distribution ,quadratic loss ,Interval (mathematics) ,Positive-definite matrix ,mean ,Absolute continuity ,Lambda ,normal ,Combinatorics ,Centering matrix ,unknown covariance matrix ,Minimax ,Statistics, Probability and Uncertainty ,62H99 ,62F10 ,Mathematics - Abstract
Let $X$ be an observation from a $p$-variate normal distribution $(p \geqslant 3)$ with mean vector $\theta$ and unknown positive definite covariance matrix $\Sigma$. We wish to estimate $\theta$ under the quadratic loss $L(\delta; \theta, \Sigma) = \lbrack\mathrm{tr}(Q\Sigma)\rbrack^{-1}(\delta - \theta)'Q(\delta - \theta)$, where $Q$ is a known positive definite matrix. Estimators of the following form are considered: $\delta_{k, h}(X, W) = \lbrack I - kh(X'W^{-1}X)\lambda_1(QW/n^\ast)Q^{-1}W^{-1}\rbrack X$, where $W: p \times p$ is observed independently of $X$ and has a Wishart distribution with $n$ degrees of freedom and parameter $\Sigma$, $\lambda_1(A)$ denotes the minimum characteristic root of $A$, and $h(t): \lbrack 0, \infty) \rightarrow \lbrack 0, \infty)$ is absolutely continuous with respect to Lebesgue measure, is nonincreasing, and satisfies the additional requirements that $th(t)$ is nondecreasing and $\sup_{t \geqslant 0} th(t) = 1$. With $h(t) = t^{-1}$, the class $\delta_{k, h}$ specializes to that considered by Berger, Bock, Brown, Casella and Gleser (1977). For the more general class considered in the present paper, it is shown that there is an interval $\lbrack 0, k_{n, p}\rbrack$ of values of $k$ (which may be degenerate for small values of $n - p$) for which $\delta_{k, h}$ is minimax and dominates the usual estimator $\delta_0 \equiv X$ in risk.
- Published
- 1979
24. A Class of Schur Procedures and Minimax Theory for Subset Selection
- Author
-
Jan F. Bjornstad
- Subjects
62C99 ,Statistics and Probability ,Discrete mathematics ,Statistics::Theory ,Class (set theory) ,education.field_of_study ,location model ,Selection (relational algebra) ,minimax procedures ,Population ,Expected value ,Minimax ,Special class ,Combinatorics ,Schur-concave functions ,62F07 ,26A51 ,Statistics, Probability and Uncertainty ,education ,Constant (mathematics) ,Subset selection ,Mathematics - Abstract
The problem of selecting a random subset of good populations out of $k$ populations is considered. The populations $\Pi_1, \cdots, \Pi_k$ are characterized by the location parameters $\theta_1, \cdots, \theta_k$, and $\Pi_i$ is said to be a good population if $\theta_i > \max_{1 \leq j\leq k}\theta_j - \Delta$, and a bad population if $\theta_i \leq \max_{1 \leq j \leq k} \theta_j - \Delta$, where $\Delta$ is a specified positive constant. A theory for a special class of procedures, called Schur procedures, is developed and applied to certain minimax problems. Subject to controlling the minimum expected number of good populations selected, or the probability that the best population is in the selected subset, procedures are derived which minimize the expected number of bad populations selected or some similar criterion. For normal populations it is known that the classical "maximum-type" procedure has certain minimax properties. In this paper, two other procedures are shown to have several minimax properties. One is the "average-type" procedure. The other procedure has not previously been considered as a serious contender.
- Published
- 1981
25. A Note on the Asymptotic Optimality of the Empirical Bayes Distribution Function
- Author
-
Benjamin Zehnwirth
- Subjects
62C99 ,Statistics and Probability ,Bayes estimator ,Concentration parameter ,Bayes factor ,Empirical Bayes ,Statistics::Computation ,Dirichlet process ,Bayes' theorem ,Generalized Dirichlet distribution ,Categorical distribution ,Econometrics ,Applied mathematics ,62G05 ,Dirichlet-multinomial distribution ,distribution function ,Statistics, Probability and Uncertainty ,Mathematics - Abstract
This paper establishes the asymptotic optimality (in the sense of Robbins) of the empirical Bayes distribution function created from the Bayes rule relative to the Dirichlet process prior with unknown parameter $\alpha(\cdot)$. It will follow that the same result applies to the estimation of the mean of a distribution function.
- Published
- 1981
26. Bayesian Reconstructions of $m, n$-Patterns
- Author
-
Marc Moore
- Subjects
Wald's decision theory ,62C99 ,Statistics and Probability ,Bayes ,Class (set theory) ,Admissible decision rule ,reconstruction rule ,62C10 ,Pattern ,sample points ,Bayesian probability ,measure ,cell ,Measure (mathematics) ,color ,Combinatorics ,Set (abstract data type) ,Bayes' theorem ,Statistics ,62C07 ,Statistics, Probability and Uncertainty ,62F99 ,Unit interval ,Probability measure ,Mathematics - Abstract
The notion of an $m, n$-pattern is introduced: a division of the unit interval into at most $n$ cells (intervals or points), each having one of $m$ colors. Given an unknown $m, n$-pattern, it is desired to produce a reconstruction of the pattern using $r \geqq 1$ sample points (fixed or chosen at random) where the color is determined. The problem is studied from a decision-theoretic point of view. A way to obtain all the probability measures on the set of $m, n$-patterns is given. The notion of a Bayesian reconstruction rule (B.R.R.) is introduced. It is proved that when B.R.R.'s are considered, it is sufficient to use certain fixed sample points. A complete class of reconstruction rules is obtained. Finally, an example of a B.R.R. is given for $2, 2$-patterns.
- Published
- 1974
27. Some Empirical Bayes Results in the Case of Component Problems with Varying Sample Sizes for Discrete Exponential Families
- Author
-
Thomas E. O'Bryan
- Subjects
62C99 ,Statistics and Probability ,Sequence ,squared error loss estimation ,Conditional probability distribution ,Decision theory ,Combinatorics ,Asymptotically optimal algorithm ,Exponential family ,Mean integrated squared error ,Sample size determination ,Statistics ,Component (group theory) ,Statistics, Probability and Uncertainty ,Random variable ,empirical Bayes ,62C25 ,Mathematics - Abstract
Consider a modified version of the empirical Bayes decision problem where the component problems in the sequence are not identical in that the sample size may vary. In this case there is not a single Bayes envelope $R(\bullet)$, but rather a sequence of envelopes $R^{m(n)}(\bullet)$ where $m(n)$ is the sample size in the $n$th problem. Let $\mathbf{\theta} = (\theta_1, \theta_2, \cdots)$ be a sequence of i.i.d. $G$ random variables and let the conditional distribution of the observations $\mathbf{X}_n = (X_{n,1}, \cdots, X_{n,m(n)})$ given $\mathbf{\theta}$ be $(P_{\theta_n})^{m(n)}, n = 1, 2, \cdots$. For a decision concerning $\theta_{n+1}$, where $\theta$ indexes a certain discrete exponential family, procedures $t_n$ are investigated which will utilize all the data $\mathbf{X}_1, \mathbf{X}_2, \cdots, \mathbf{X}_{n+1}$ and which, under certain conditions, are asymptotically optimal in the sense that $E|t_n - \theta_{n+1}|^2 - R^{m(n+1)}(G) \rightarrow 0$ as $n \rightarrow \infty$ for all $G$.
- Published
- 1976
28. Empirical Bayes Estimation of a Distribution Function
- Author
-
Myles Hollander and Ramesh M. Korwar
- Subjects
62C99 ,Statistics and Probability ,Bayes estimator ,empirical Bayes estimator ,Bayes factor ,Empirical distribution function ,Dirichlet process ,Bayes' theorem ,Generalized Dirichlet distribution ,Statistics ,Bayes error rate ,Distribution function ,62G05 ,Dirichlet-multinomial distribution ,Statistics, Probability and Uncertainty ,Mathematics - Abstract
A sequence of empirical Bayes estimators is defined for estimating a distribution function. The sequence is shown to be asymptotically optimal relative to a Ferguson Dirichlet process prior. Exact risk expressions are derived and the rate, at which the overall expected loss approaches the minimum Bayes risk, is exhibited. The empirical Bayes approach, based on the Dirichlet process, is also applied to the problem of estimating the mean of a distribution.
- Published
- 1976
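The Bayes rule underlying the empirical Bayes estimators in the preceding entry is Ferguson's posterior mean of $F$ under a Dirichlet process prior with measure $\alpha(\cdot) = M F_0$: a weighted average of the prior guess $F_0$ and the empirical distribution function. The sketch below evaluates that Bayes estimate on a grid; the empirical Bayes step of estimating the prior from past samples is not reproduced, and $M$, $F_0$ and the data are assumptions.

```python
# Hedged sketch: the Bayes estimate of a distribution function under a
# Dirichlet process prior with measure alpha(.) = M * F0 (Ferguson, 1973),
# i.e. a weighted average of the prior guess F0 and the empirical cdf.
import numpy as np
from scipy.stats import norm

def dp_posterior_cdf(x_data, t, M=5.0, F0=norm.cdf):
    n = len(x_data)
    empirical = np.mean(x_data[:, None] <= t, axis=0)   # F_n evaluated at t
    return (M * F0(t) + n * empirical) / (M + n)

rng = np.random.default_rng(6)
data = rng.normal(loc=1.0, size=30)       # illustrative data (assumption)
grid = np.linspace(-2, 3, 6)
print(np.round(dp_posterior_cdf(data, grid), 3))
```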
29. Simultaneous Estimation of Location Parameters Under Quadratic Loss
- Author
-
Nobuo Shinozaki
- Subjects
risk function ,62C99 ,Statistics and Probability ,Mathematical optimization ,Location parameter ,Double exponential function ,Estimator ,location parameter ,Simultaneous estimation ,quadratic loss ,Risk function ,Quadratic equation ,Efficient estimator ,best invariant estimator ,Applied mathematics ,Integration by parts ,62H12 ,Statistics, Probability and Uncertainty ,Invariant (mathematics) ,62F10 ,Mathematics - Abstract
Simultaneous estimation of $p$ $(p \geq 3)$ location parameters is considered under quadratic loss. Explicit estimators which dominate the best invariant one are given mainly when the coordinates of the best invariant one are independently, identically and symmetrically distributed. The effectiveness of integration by parts in evaluating the risk function of the dominating estimator is shown for three typical continuous distributions (uniform, double exponential and $t$). Further explicit dominating estimators are given in terms of the second and fourth moments of the best invariant one.
- Published
- 1984
30. Consistency in the Location Model: The Undominated Case
- Author
-
Albert Y. Lo
- Subjects
62C99 ,Statistics and Probability ,Location model ,Strong consistency ,Statistics::Computation ,Pitman estimate ,Bayes method ,Consistency (statistics) ,Statistics ,Econometrics ,strong consistency ,Statistics, Probability and Uncertainty ,Mathematics - Abstract
Consistency in the undominated location model is investigated from a Bayesian point of view, and a proof of the consistency of the Bayes procedures with respect to the invariant prior is provided. The consistency of Bayes procedures with respect to other prior measures is established as a corollary.
- Published
- 1984
31. Best Invariant Estimation of a Distribution Function under the Kolmogorov-Smirnov Loss Function
- Author
-
Alexander Gelman, Yaakov Friedman, and Eswar Phadia
- Subjects
62C99 ,Statistics and Probability ,Mathematical optimization ,Invariant ,Half-normal distribution ,Characteristic function (probability theory) ,Estimator ,nonparametric estimation ,Kolmogorov-Smirnov loss ,Kolmogorov–Smirnov test ,Empirical distribution function ,symbols.namesake ,Error function ,monotone transformations ,Step function ,symbols ,Applied mathematics ,62G05 ,Statistics, Probability and Uncertainty ,Invariant estimator ,Mathematics - Abstract
Given a random sample of size $n$ from an unknown continuous distribution function $F$, we consider the problem of estimating $F$ nonparametrically from a decision theoretic approach. In our treatment, we assume the Kolmogorov-Smirnov loss function and the group of all one-to-one monotone transformations of real numbers onto themselves which leave the sample values invariant. Under this setup, we obtain a best invariant estimator of $F$ which is shown to be unique. This estimator is a step function with unequal amounts of jumps at the observations and is an improper distribution function. It is remarked that this estimator may be used in constructing the best invariant confidence bands for $F$, and also in carrying out a goodness-of-fit test.
- Published
- 1988