Search Results (40 results)
2. Comments on Kurtz-Link-Tukey-Wallace Paper.
- Author
-
Anscombe, F.J.
- Subjects
ERROR analysis in mathematics ,CONFIDENCE intervals ,MULTIPLE comparisons (Statistics) - Abstract
Comments on a paper by T.E. Kurtz, R.F. Link, J.W. Tukey and D.L. Wallace titled 'Short-Cut Multiple Comparisons for Balanced Single and Double Classifications: Part 1, Results,' which appeared in the May 1965 issue of 'Technometrics.' Basic notions of error rates; Simultaneous confidence intervals; Multiple comparison problems.
- Published
- 1965
- Full Text
- View/download PDF
3. Discussion of Toward Experimental Criteria for Judging Disclosure Improvement.
- Author
-
McDonald, Daniel L.
- Subjects
DISCLOSURE in accounting ,MARKET value ,MARKET prices ,CONFIDENCE intervals ,ACCOUNTING ,FINANCIAL disclosure - Abstract
Empirical work inevitably forces us to consider whether and with what revisions the work should be replicated. Professor Stallman's paper suggests many intriguing extensions, but let me single out three which particularly interest me. 1. The relationship between predictive capacity and willingness to depart from fictitious market values needs to be explored. Hopefully these two criteria will not be conflicting. If so, one may be operationally easier to apply and can serve as a proxy for the other. If not, a conceptual choice must be made. 2. Any replication should attempt to avoid possible bias from the manner in which the quoted market prices are determined. Perhaps this can only be done by taking data from a real multi-industry firm rather than constructing a fictitious one. 3. Confidence has been used in only the sense of willingness to differ from others (depart from a fictitious market price). If replications ask for point estimates as well as some confidence interval (e.g., 50%) about the point estimate, the dispersion view of confidence can also be tested. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
4. CAPITAL EXPENDITURE PROGRAMMING AND SOME ALTERNATIVE APPROACHES TO RISK.
- Author
-
Peterson, D. E. and Laughhunn, D. J.
- Subjects
CAPITAL investments ,MATHEMATICAL programming ,DECISION making ,BUDGET ,ANALYSIS of variance ,UTILITY functions ,DECISION theory ,CAPITAL budget ,CONFIDENCE intervals ,RISK management in business ,BUSINESS losses ,PROBLEM solving - Abstract
This paper investigates the potential reduction in decision-making effort in capital budgeting problems obtainable through the use of measures of risk in addition to variance. Specific measures of risk treated are Baumol's lower confidence limit and the maximum probability of loss. The primary purpose of the paper is to present a methodology which imposes certain "constraining relations" on acceptable investment programs rather than one which appeals to a specific utility function as the basis for ordering choices. In this connection a discussion of several different utility functions is presented, along with an analysis of their usefulness when the probability distributions of net present values for various investment portfolios cannot be taken as known. In addition, some of the logical problems involved in constructing a utility function are examined. [ABSTRACT FROM AUTHOR]
- Published
- 1971
- Full Text
- View/download PDF
5. STATISTICAL ANALYSIS FOR QUEUEING SIMULATIONS.
- Author
-
Fishman, George S.
- Subjects
QUEUING theory ,SYSTEM analysis ,COMPUTER simulation ,EXPERIMENTAL design ,STOCHASTIC process software ,CONFIDENCE intervals ,STATISTICAL sampling ,VARIANCES ,SIMULATION methods & models - Abstract
This paper presents a method for estimating the variances of sample performance measures in queueing simulations, for removing the bias in these sample measures due to initial conditions, and for deriving approximate confidence intervals for the true performance measures. It also describes how a variance reduction can be achieved using the method when comparing performance measures for two queue disciplines. The approach requires only that, every time the system passes through the empty-and-idle condition, future behavior is independent of past behavior. [ABSTRACT FROM AUTHOR]
- Published
- 1973
- Full Text
- View/download PDF
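The empty-and-idle regeneration idea described in the abstract can be illustrated on an M/M/1 queue. The sketch below is a minimal demonstration under stated assumptions, not Fishman's full procedure: arrival rate, service rate, sample size, and the simple ratio-estimator confidence interval are all illustrative choices.

```python
import math
import random

def regenerative_mm1(lam=0.5, mu=1.0, n_customers=200_000, z=1.96, seed=7):
    """Estimate the mean waiting time in an M/M/1 queue with a confidence
    interval built from regeneration cycles: each customer who arrives to
    an empty-and-idle system starts a new, statistically independent cycle."""
    rng = random.Random(seed)
    w = 0.0
    cycles = []                      # (sum of waits, customers) per cycle
    y, a = 0.0, 0
    for _ in range(n_customers):
        if w == 0.0 and a > 0:       # arrival to an idle system: new cycle
            cycles.append((y, a))
            y, a = 0.0, 0
        y += w
        a += 1
        # Lindley recursion: next wait = max(0, wait + service - interarrival)
        w = max(0.0, w + rng.expovariate(mu) - rng.expovariate(lam))
    cycles.append((y, a))

    n = len(cycles)
    r_hat = sum(y for y, _ in cycles) / sum(a for _, a in cycles)
    a_bar = sum(a for _, a in cycles) / n
    s2 = sum((y - r_hat * a) ** 2 for y, a in cycles) / (n - 1)
    half = z * math.sqrt(s2 / n) / a_bar
    return r_hat, r_hat - half, r_hat + half
```

With these parameters the theoretical mean wait is λ/(μ(μ−λ)) = 1.0, and the returned interval should bracket it in roughly 95% of runs.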
6. A NOTE ON QUADRATIC PROGRAMMING IN ACTIVATION ANALYSIS.
- Author
-
Smith, Lee H.
- Subjects
QUADRATIC programming ,COST analysis ,CONFIDENCE intervals ,NUCLEAR activation analysis ,STATISTICAL sampling ,NONLINEAR programming ,ESTIMATION theory - Abstract
Various statistical techniques have been employed in 'activation analysis' in order to provide a better means of estimating the amounts of various pure chemical elements contained in an unknown mixture. In particular, the method of least squares has been employed extensively. However, for the most part, the usual least squares applications in activation analysis have utilized the ordinary matrix model Y = Xβ + e, under the 'error' assumptions (a) zero means, (b) variances proportional to Y, and (c) zero covariances. In addition to the fact that assumptions (b) and (c) may lead to erroneous results, the usual applications allow only point estimation, with no provision for confidence intervals and tests for model goodness of fit. Further, the usual applications fail to eliminate the drawback that negative coefficients are sometimes obtained. The present paper sets forth an iterative quadratic programming estimation procedure that not only eliminates the necessity for assumptions (b) and (c), but also alleviates the other above-mentioned difficulties. [ABSTRACT FROM AUTHOR]
- Published
- 1970
- Full Text
- View/download PDF
7. ESTIMATION OF MULTIPLE CONTRASTS USING t-DISTRIBUTIONS.
- Author
-
Dunn, Olive Jean and Massey Jr, Frank J.
- Subjects
- *
TIME series analysis , *CHARACTERISTIC functions , *MATHEMATICAL statistics , *PROBABILITY theory , *CONFIDENCE intervals , *DISTRIBUTION (Probability theory) , *MATHEMATICAL models , *STATISTICAL sampling , *MULTIVARIATE analysis , *STATISTICS - Abstract
Various methods based on Student t variates have been suggested and used for obtaining simultaneous confidence intervals for several means, or for several contrasts among means. Determination of an overall confidence level for such intervals involves evaluating the probability mass of a multivariate t distribution over a hypercube centered at the origin, with sides paralleling the coordinate planes, or obtaining bounds for this probability mass. Since such distributions involve many nuisance parameters, an impossible number of tables would be necessary in order to make exact confidence intervals. In the virtual absence of tables, approximations and bounds become important. In this paper, an attempt has been made to investigate the adequacy of certain suggested approximations [2], [5], [8] by computing the exact distributions for some particular cases. These exact distributions have been compared with approximations. This paper is concerned with two-sided confidence intervals, rather than one-sided intervals. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
8. BETTER ESTIMATES OF CONFIDENCE INTERVALS FOR VERY LOW ERROR RATE POPULATION.
- Author
-
Birnberg, Jacob G. and Pratt, Robert J. A.
- Subjects
POPULATION ,ESTIMATION theory ,GRAPHIC methods ,POPULATION research ,PROBABILITY theory ,ERRORS ,CONFIDENCE intervals ,STATISTICAL sampling ,STATISTICS ,BOUNDARY element methods ,BOUNDARY value problems ,POPULATION forecasting - Abstract
In the estimation of error rates for populations whose error rate is quite small, the usual assumption of normality can lead to erroneous results. The confidence interval that is actually calculated for any given probability will have too low a lower bound, and not a high enough upper bound. Thus, the user may be misled into too optimistic a view of the population being sampled. This article discusses the nature of the problem and provides graphs from which the more accurate interval can be read. The final section of the paper deals with the related problem of confidence intervals when the observed error rate is zero. Tables are provided which facilitate the developing of confidence statements for such samples. [ABSTRACT FROM AUTHOR]
- Published
- 1966
- Full Text
- View/download PDF
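For the zero-observed-errors case mentioned at the end of the abstract, the standard exact one-sided binomial bound is easy to state (a generic sketch, not a reproduction of the authors' tables): if a sample of n items contains no errors, the upper 100(1−α)% confidence limit for the error rate p solves (1−p)ⁿ = α.

```python
def upper_limit_zero_errors(n, alpha=0.05):
    """Exact upper confidence limit for the population error rate when a
    sample of n items contains zero errors: solve (1 - p)**n = alpha."""
    return 1.0 - alpha ** (1.0 / n)

# e.g. 100 items sampled, no errors found: p <= about 0.0295 at 95% confidence
print(upper_limit_zero_errors(100))
```

For 95% confidence this is well approximated by the familiar "rule of three," roughly 3/n.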
9. A Conservative Confidence Interval for a Likelihood Ratio.
- Author
-
Mitchell, Ann F. S. and Payne, Clive D.
- Subjects
- *
ANALYSIS of variance , *GAUSSIAN distribution , *CONFIDENCE intervals , *PARAMETER estimation , *RATIO analysis , *SIMULATION methods & models , *STATISTICAL sampling , *RATIO measurement , *STATISTICS - Abstract
A method is described for assigning an observation to one of two normal populations with differing, unknown means and differing, unknown variances. The classification procedure rests on the likelihood ratio, which, for a given observation, is a function of four unknown parameters. Sample information is used to obtain a confidence region for these parameters. From this confidence region, a conservative confidence interval for the likelihood ratio is derived. Interpreting the likelihood ratio as a measure of the odds in favor of each population as the source of the observation, the interval can be used, in an obvious manner, for classification purposes. Simulation techniques are employed to examine the conservative nature of the interval. Finally, as an illustration of the method, the results are applied to the determination of authorship of the disputed Federalist papers. [ABSTRACT FROM AUTHOR]
- Published
- 1971
- Full Text
- View/download PDF
10. SHORTER CONFIDENCE INTERVALS USING PRIOR OBSERVATIONS.
- Author
-
Deely, J. J. and Zimmer, W. J.
- Subjects
- *
MATHEMATICAL models , *CONFIDENCE intervals , *STATISTICAL hypothesis testing , *VARIANCES , *ESTIMATES , *STATISTICAL sampling - Abstract
The purpose of this paper is to make the reader aware of the applicability and advantage of a particular mathematical model. The application is typified by an example and the advantage is via confidence intervals; that is, shorter confidence intervals are possible using the model than if one ignores it, provided the applicability is valid. It is also shown that an improved estimate can be obtained through use of the model. Let f(y|μ, σ) be a normal density with mean μ and variance σ² and let g(μ|λ, β) be a normal density with mean λ and variance β². A sequence y₁, y₂, ..., yₙ₊₁ of independent observations from the mixture of f and g can be considered as follows: an unobservable μᵢ is first drawn from g(μ|λ, β) and then yᵢ, which can be observed, is drawn from f(y|μᵢ, σ). Confidence intervals on μₙ₊₁ are obtained which are based on the observations y₁, ..., yₙ₊₁ and which are shorter than the standard interval based on yₙ₊₁ only, for any n. Shorter intervals are obtained for two cases: (i) λ unknown, σ, β known; (ii) only σ/β = c known. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
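Case (i) of the abstract (λ unknown, σ and β known) can be illustrated with a small Monte Carlo sketch. The combined estimator below shrinks y_{n+1} toward the mean of the earlier observations, and the interval width follows from the exact normal variance of the estimation error. This is a natural construction consistent with the abstract, not necessarily the authors' exact interval; the parameter values are arbitrary.

```python
import math
import random

def simulate(n=10, sigma=1.0, beta=1.0, lam=3.0, reps=20_000, z=1.96, seed=1):
    """Compare the standard interval y_{n+1} +/- z*sigma with a shrinkage
    interval that also uses y_1..y_n (lambda unknown, sigma and beta known)."""
    rng = random.Random(seed)
    w = beta**2 / (sigma**2 + beta**2)            # shrinkage weight
    # exact variance of mu_{n+1} - T for T = ybar + w*(y_{n+1} - ybar)
    var_t = (1 - w)**2 * (beta**2 + (sigma**2 + beta**2) / n) + w**2 * sigma**2
    cover_std = cover_shrunk = 0
    for _ in range(reps):
        # earlier observations, marginally N(lambda, sigma^2 + beta^2)
        ys = [rng.gauss(lam, math.sqrt(sigma**2 + beta**2)) for _ in range(n)]
        mu = rng.gauss(lam, beta)                 # unobservable mu_{n+1}
        y = rng.gauss(mu, sigma)                  # observed y_{n+1}
        ybar = sum(ys) / n
        t = ybar + w * (y - ybar)
        cover_std += abs(mu - y) <= z * sigma
        cover_shrunk += abs(mu - t) <= z * math.sqrt(var_t)
    width_ratio = math.sqrt(var_t) / sigma        # shrunk width / standard width
    return cover_std / reps, cover_shrunk / reps, width_ratio
```

Both intervals attain the nominal 95% coverage, but the shrinkage interval is strictly narrower (width ratio √(var_t)/σ < 1 whenever β is finite).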
11. EXACT THREE-ORDER-STATISTIC CONFIDENCE BOUNDS ON RELIABLE LIFE FOR A WEIBULL MODEL WITH PROGRESSIVE CENSORING.
- Author
-
Mann, Nancy R.
- Subjects
- *
WEIBULL distribution , *CONFIDENCE intervals , *ORDER statistics , *STATISTICAL sampling , *DISTRIBUTION (Probability theory) , *FAILURE time data analysis , *MATHEMATICAL statistics - Abstract
A progressive-censoring model arises from a life test of a sample of items in which one or more of the survivors may be removed from the test at the time of any failure. Such a model is often more realistic for actual failure data which must be analyzed by a statistician than one in which all survivors are assumed to be removed from test simultaneously. This paper deals with the situation in which the underlying failure-time distribution for the population sampled is the two-parameter Weibull distribution. The reliable life for the population is defined to be the 100(1-R) percent point of the failure-time distribution, where R is a specified population survival proportion, or reliability. An exact confidence bound on reliable life based on three observed ordered failure times is derived for this progressive-censoring model. The criterion used for selecting the order numbers of the three failure times upon which the bound is based depends upon computed values of the power function of the test associated with the bound. A table from which lower bounds can be obtained is given for R equal to .95, confidence level .90, sample size equal to 2, 3, . . . , 6, and all possible censorings. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
12. ESTIMATES OF THE REGRESSION COEFFICIENT BASED ON KENDALL'S TAU.
- Author
-
Sen, Pranab Kumar
- Subjects
- *
LEAST squares , *REGRESSION analysis , *STATISTICAL correlation , *CONFIDENCE intervals , *ESTIMATION theory , *STATISTICS , *NONPARAMETRIC statistics , *MATHEMATICAL statistics - Abstract
The least squares estimator of a regression coefficient β is vulnerable to gross errors and the associated confidence interval is, in addition, sensitive to non-normality of the parent distribution. In this paper, a simple and robust (point as well as interval) estimator of β based on Kendall's [6] rank correlation tau is studied. The point estimator is the median of the set of slopes (Yⱼ − Yᵢ)/(tⱼ − tᵢ) joining pairs of points with tᵢ ≠ tⱼ, and is unbiased. The confidence interval is also determined by two order statistics of this set of slopes. Various properties of these estimators are studied and compared with those of the least squares and some other nonparametric estimators. [ABSTRACT FROM AUTHOR]
- Published
- 1968
- Full Text
- View/download PDF
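The median-of-slopes point estimator described in the abstract is straightforward to compute. The sketch below shows only the point estimate; the paper's interval estimator, which takes two order statistics of the same set of slopes, is omitted here.

```python
from itertools import combinations
from statistics import median

def slope_estimate(t, y):
    """Point estimate of the regression slope beta: the median of the
    pairwise slopes (y_j - y_i)/(t_j - t_i) over all pairs with t_i != t_j."""
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i, j in combinations(range(len(t)), 2)
              if t[i] != t[j]]
    return median(slopes)

# robust to a gross error in one observation: the outlier y = 100
# perturbs only 4 of the 10 pairwise slopes, so the median is still 2
print(slope_estimate([1, 2, 3, 4, 5], [2, 4, 6, 8, 100]))  # 2.0
```

A least squares fit to the same data would be pulled far from 2 by the single outlier, which is the robustness property the abstract highlights.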
13. STRAIGHT LINE CONFIDENCE REGIONS FOR LINEAR MODELS.
- Author
-
Folks, John Leroy and Antle, Charles E.
- Subjects
- *
CONFIDENCE intervals , *MATHEMATICAL statistics , *LINEAR statistical models , *MATHEMATICAL models , *STATISTICS , *PARAMETERS (Statistics) - Abstract
This paper presents a method of constructing conservative joint confidence regions for the parameters of a linear model and the volumes of these regions are compared with the volume of an exact confidence region. Confidence regions on the entire surface are obtained from the confidence region on the parameters. The procedures are illustrated for the case of a simple linear model. [ABSTRACT FROM AUTHOR]
- Published
- 1967
- Full Text
- View/download PDF
14. CONFIDENCE LIMITS FOR THE RELIABILITY OF SERIES SYSTEMS.
- Author
-
El Mawaziny, A. H. and Buehler, R. J.
- Subjects
- *
CONFIDENCE intervals , *CONFIDENCE , *PROBABILITY theory , *EXPONENTIAL families (Statistics) , *APPROXIMATION theory , *FUNCTIONAL analysis , *POLYNOMIALS - Abstract
It is desired to set confidence limits for the probability of successful operation at least until time x₀ of a series system of k dissimilar components. The components follow exponential failure laws, and data are available on failure times of components of each type. Exact confidence limits for k=2 have previously been given by Lentner and Buehler. The present paper deals with a large-sample approximation to the exact solution for arbitrary k. [ABSTRACT FROM AUTHOR]
- Published
- 1967
- Full Text
- View/download PDF
15. WILCOXON CONFIDENCE INTERVALS FOR LOCATION PARAMETERS IN THE DISCRETE CASE.
- Author
-
Noether, Gottfried E.
- Subjects
- *
CONFIDENCE intervals , *STATISTICAL sampling , *FRACTIONAL parentage coefficients , *LEAST squares , *HYPOTHESIS - Abstract
The paper surveys results about the behavior of nonparametric methods in cases where the customary continuity assumptions are not satisfied. A projection approach is used to show that true confidence coefficients associated with confidence intervals for appropriate location parameters derived from Wilcoxon one- and two-sample tests are at least equal to the nominal level for the continuous case, if the confidence intervals are considered closed, at most equal to the nominal level, if they are considered open. The results are interpreted in terms of tests of hypotheses. [ABSTRACT FROM AUTHOR]
- Published
- 1967
- Full Text
- View/download PDF
16. CONFIDENCE, PREDICTION, AND TOLERANCE REGIONS FOR THE MULTIVARIATE NORMAL DISTRIBUTION.
- Author
-
Chew, Victor
- Subjects
- *
CONFIDENCE intervals , *GAUSSIAN distribution , *PREDICTION theory , *DISTRIBUTION (Probability theory) , *STATISTICAL tolerance regions , *MULTIVARIATE analysis , *ANALYSIS of covariance , *MATRICES (Mathematics) , *VECTOR analysis - Abstract
Formulas for confidence, prediction, and tolerance regions for the multivariate normal distribution for the various cases of known and unknown mean vector and covariance matrix are assembled for easy reference in this expository paper. Tables are provided for the bivariate case. [ABSTRACT FROM AUTHOR]
- Published
- 1966
- Full Text
- View/download PDF
17. SOME GRAPHS USEFUL FOR STATISTICAL INFERENCE.
- Author
-
Guenther, William C. and Thomas, P. O.
- Subjects
- *
GRAPHIC methods , *GRAPHIC methods in statistics , *STATISTICAL sampling , *CONFIDENCE intervals , *HYPOTHESIS , *SAMPLE size (Statistics) , *PROBABILITY theory , *STATISTICS , *MATHEMATICAL statistics - Abstract
The determination of sample size to meet certain probability requirements is a problem which often faces an experimenter when obtaining a confidence interval or testing a hypothesis. This paper includes some new graphs useful in selecting sample size and gives a reference to a new textbook which includes a number of other similar type graphs. The method of construction is explained in detail and examples are included. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
18. Uniform Confidence Bands for a Quadratic Model.
- Author
-
Trout, J. Richard and Chow, Bryant
- Subjects
CONFIDENCE intervals ,REGRESSION analysis - Abstract
In this paper uniform confidence bands are developed for a quadratic regression model. Tables are supplied to determine a uniform confidence band for the quadratic model in which it is assumed the sum of the cubed deviations of the independent variable about its mean is equal to zero. The uniform confidence band is then compared with the classical competitor, the Scheffé confidence band. A table summarizing the results of this comparison is presented. Finally, an example is examined. [ABSTRACT FROM AUTHOR]
- Published
- 1973
- Full Text
- View/download PDF
19. Comparison of Approximate Confidence Intervals for the Exponential Scale Parameter from Sample Quantiles.
- Author
-
Kaminsky, Kenneth S.
- Subjects
CONFIDENCE intervals ,CHI-squared test ,ESTIMATION theory - Abstract
Several authors have considered point and interval estimation of the exponential scale parameter, σ, on the basis of subsets of the order statistics. In this paper, we suggest a procedure for finding approximate confidence intervals for σ in large samples, using k sample quantiles of a random sample of size n. The procedure is based on a simple chi-square approximation to the distribution of the asymptotically best linear estimate of σ. We compare this procedure with the one given by Ogawa (1962) based on an approximating t-distribution. We find that the interval based on the chi-square approximation is easier to calculate, and performs better when k is small and n is large. [ABSTRACT FROM AUTHOR]
- Published
- 1973
- Full Text
- View/download PDF
20. Approximate Confidence Limits for the Reliability of Series and Parallel Systems.
- Author
-
Madansky, Albert
- Subjects
CONFIDENCE intervals ,ACCELERATED life testing - Abstract
Suppose a complex mechanism, e.g., a missile, is built up from a number of different types of components, where the reliability of each of the components has been estimated by means of separate tests on each of the components. This paper gives a method for combining such data to determine approximate confidence limits for the reliability of the complete mechanism. More precisely, a method of determining approximate confidence limits for the reliability of "series," "parallel," and "series-parallel" systems is given, based on observed failures of the individual components. It is assumed that the failures are independent, and that failures of a given component follow a binomial distribution with unknown parameter, the component reliability. The large-sample properties of the likelihood-ratio test are then used to construct the appropriate confidence limits for the system reliability. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
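The point estimates in the abstract combine multiplicatively. As an illustration, the sketch below computes them from per-component pass/fail data and adds a parametric-bootstrap lower limit in place of Madansky's likelihood-ratio construction; that substitution is deliberate, since the profile-likelihood computation is considerably more involved, and all the data values are hypothetical.

```python
import random

def series_reliability(successes, trials):
    """Point estimate of series-system reliability: the system works only
    if every component works, so multiply the binomial estimates."""
    r = 1.0
    for s, n in zip(successes, trials):
        r *= s / n
    return r

def parallel_reliability(successes, trials):
    """Point estimate of parallel-system reliability: the system fails
    only if every component fails."""
    q = 1.0
    for s, n in zip(successes, trials):
        q *= 1.0 - s / n
    return 1.0 - q

def bootstrap_lower_limit(successes, trials, alpha=0.05, reps=10_000, seed=3):
    """Approximate lower confidence limit for series reliability via a
    parametric bootstrap (NOT Madansky's likelihood-ratio method)."""
    rng = random.Random(seed)
    stats = []
    for _ in range(reps):
        resampled = [sum(rng.random() < s / n for _ in range(n))
                     for s, n in zip(successes, trials)]
        stats.append(series_reliability(resampled, trials))
    stats.sort()
    return stats[int(alpha * reps)]
```

With 48/50 and 95/100 successes the series point estimate is 0.96 × 0.95 = 0.912, and the bootstrap lower limit falls below it by roughly the sampling variability of the two binomial estimates.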
21. Short-Cut Multiple Comparisons for Balanced Single and Double Classifications: Part 1, Results.
- Author
-
Kurtz, T.E., Link, R.F., Tukey, J.W., and Wallace, D.L.
- Subjects
MULTIPLE comparisons (Statistics) ,CONFIDENCE intervals - Abstract
Methods are provided for the rapid analysis of data in balanced single and double classifications. Sums of ranges provide short-cut measures of variability with only slight loss of efficiency. Tables of factors are provided to convert these sums directly into widths of simultaneous confidence intervals for simple comparisons of group totals. Two illustrative examples are treated in detail. The proposed procedures are recommended for routine use in initial analyses, though not to the exclusion of more refined procedures. The paper includes a general discussion of multiple comparison procedures: when they should and should not be used, the importance of confidence procedures and their advantages over significance procedures, choice among multiple comparisons confidence procedures, and choice and description of error rates. It does not attempt to compare multiple comparison and multiple decision procedures. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
22. HYPOTHESIS TESTS AND CONFIDENCE INTERVALS FOR STEADY-STATE COEFFICIENTS IN MODELS WITH LAGGED DEPENDENT VARIABLES: SOME NOTES ON FIELLER'S METHOD.
- Author
-
Blomqvist, A. G.
- Subjects
CONFIDENCE intervals ,STATISTICAL hypothesis testing ,ECONOMETRICS ,STATISTICAL sampling ,MATHEMATICAL variables ,MATHEMATICS - Abstract
This article offers hypothesis tests and confidence intervals for steady-state coefficients in models with lagged dependent variables. Such models arise in a variety of econometric problems, and the methods generally used to obtain consistent estimates are not unbiased in small samples; this calls into question the conventional procedures for significance testing and computation of confidence intervals, and hence the degree of uncertainty surrounding an estimated coefficient.
- Published
- 1973
- Full Text
- View/download PDF
23. System Reliability Assessment from its Components.
- Author
-
O'Neill, T.S.
- Subjects
CONFIDENCE intervals ,ESTIMATION theory ,STATISTICAL methods in engineering reliability ,STATISTICS - Abstract
Design, development and procurement programmes often require assessment of system reliabilities at a time when few or no systems are available for test. In these circumstances estimates based on sub-system data may be sought. This paper examines some methods of estimating lower confidence limits on system reliability for serial systems, based on sub-system test data, and compares their performance in some cases of interest using Monte Carlo simulation. [ABSTRACT FROM AUTHOR]
- Published
- 1972
- Full Text
- View/download PDF
24. SIMULTANEOUS TESTS FOR TREND AND SERIAL CORRELATIONS FOR GAUSSIAN MARKOV RESIDUALS.
- Author
-
Krishnaiah, P. R. and Murthy, V. K.
- Subjects
AUTOCORRELATION (Statistics) ,MARKOV processes ,STATISTICAL correlation ,STOCHASTIC processes ,CONFIDENCE intervals ,STATISTICAL sampling ,ECONOMICS ,ECONOMETRICS ,MATHEMATICAL economics - Abstract
In this paper, exact tests are proposed for testing the trend in the presence of autocorrelation and also for testing the trend and autocorrelation simultaneously in a first order Markov process. Also, the simultaneous confidence intervals associated with these tests are derived. These results are extended to a higher order Markov process. [ABSTRACT FROM AUTHOR]
- Published
- 1966
- Full Text
- View/download PDF
25. ESTIMATING SAMPLE SIZE IN COMPUTING SIMULATION EXPERIMENTS.
- Author
-
Fishman, George S.
- Subjects
STATISTICAL sampling ,SIMULATION methods & models ,COMPUTER simulation ,SAMPLE size (Statistics) ,SAMPLE variance ,AUTOREGRESSION (Statistics) ,ESTIMATION theory ,ESTIMATION bias ,T-test (Statistics) ,PROBABILITY measures ,CONFIDENCE intervals ,QUEUING theory - Abstract
A method is described for determining and collecting the sample size needed to estimate the mean of a process (with a specified level of statistical precision) in a simulation experiment. Steps are also discussed for incorporating the determination and collection of the sample size into a computer library routine that can be called by the ongoing simulation program. We present the underlying probability model that enables us to denote the variance of the sample mean as a function of the autoregressive representation of the process under study and describe the estimation and testing of the parameters of the autoregressive representation in a way that can easily be "built into" a computer program. Several reliability criteria are discussed for use in determining sample size. Since these criteria assume that the variance of the sample mean is known, an adjustment is necessary to account for the substitution of an estimate for this variance. It is suggested that Student's distribution be used as the sampling distribution, with "equivalent degrees of freedom" determined by analogy with a sequence of independent observations. A bias adjustment is described that can be applied to the beginning of the collected data to reduce the influence of initial conditions on events in the experiment. Four examples are presented using these techniques, and comparisons are made with known theoretical solutions. One unfortunate shortcoming of the proposed procedure is that its performance is directly linked to the initially chosen sample size. Our results show that as this sample size increases, the procedure gives results which agree more closely with predicted results. [ABSTRACT FROM AUTHOR]
- Published
- 1971
- Full Text
- View/download PDF
26. A Study of Confidence Interval Financial Statements.
- Author
-
OLIVER, BRUCE L.
- Subjects
ECONOMIC aspects of decision making ,FINANCIAL statements ,PROBABILITY measures ,LOANS ,CONFIDENCE intervals ,COMPARATIVE studies ,ECONOMICS - Abstract
The article reports on a study which examined whether or not one particular form of probabilistic reporting had any material effect on responses to a hypothetical loan decision. A comparison was made of loan decisions made under the disclosure of financial statements and probabilistic statements. The study results did not indicate sufficient differences between the two groups to warrant the rejection of any hypothesis. On this basis it can be concluded that the respondents did not significantly alter their hypothetical loan decisions even though they received two different types of loan statements.
- Published
- 1972
- Full Text
- View/download PDF
27. Asymptotic Expansions Related to Minimum Contrast Estimators
- Author
-
Pfanzagl, J.
- Published
- 1973
28. The Up-and-Down Method for Small Samples with Extreme Value Response Distributions.
- Author
-
Little, R. E.
- Subjects
STATISTICS ,ESTIMATES ,SAMPLE size (Statistics) ,STATISTICAL sampling ,CONFIDENCE intervals ,PARAMETERS (Statistics) - Abstract
Tables for computing LD50 estimates are presented for the extreme value (smallest and largest) distributions based on maximum likelihood and minimum chi-square analyses for small sample up-and-down test outcome sequences of nominal length N, 2 ≦ N ≦ 6. Overall, especially considering the small sample sizes involved, LD50 estimates based on assumed symmetrical distributions seem relatively robust to minor departures from symmetry of the actual response distribution. [ABSTRACT FROM AUTHOR]
- Published
- 1974
- Full Text
- View/download PDF
29. Confidence Interval Estimation for Means After Data Transformations to Normality.
- Author
-
Land, Charles E.
- Subjects
CONFIDENCE intervals ,STATISTICAL hypothesis testing ,STATISTICAL sampling ,MATHEMATICAL models ,STATISTICAL correlation ,PROBABILITY theory - Abstract
When data are transformed to satisfy a spherical normal linear model, the mean θ of a variate in the original scale is a function of the mean μ and variance σ² of a normal variate. We consider several approximate confidence interval methods for θ, including a new method based on exact confidence intervals for linear functions of μ and σ². Monte Carlo estimates of coverage probabilities demonstrate the suitability of the new method for applications involving a wide range of data transformations, parameter values and sample sizes. [ABSTRACT FROM AUTHOR]
- Published
- 1974
- Full Text
- View/download PDF
30. Authors' Reply to Anscombe's Comments.
- Author
-
Kurtz, T.E., Link, R.F., Tukey, J.W., and Wallace, D.L.
- Subjects
MULTIPLE comparisons (Statistics) ,CONFIDENCE intervals ,UNCERTAINTY - Abstract
Responds to F.J. Anscombe's comments on authors' paper titled 'Short-Cut Multiple Comparisons for Balanced Single and Double Classifications: Part 1, Results,' which appeared in the May 1965 issue of 'Technometrics.' Communication of a level of uncertainty of a confidence statement; Decision-theoretic methods; Ways of balancing uncertainties in inferences from complex experiments.
- Published
- 1965
- Full Text
- View/download PDF
31. CONFIDENCE INTERVALS FOR CLUSTERED SAMPLES.
- Author
-
Kish, Leslie
- Subjects
CONFIDENCE intervals ,STATISTICAL sampling ,STATISTICAL hypothesis testing ,HYPOTHESIS ,POPULATION ,TESTING - Abstract
Standard statistical literature has been developed almost entirely in terms of independent observations obtained by simple random sampling (SRS). The statistical tests and confidence intervals one finds in textbooks and in journal articles are based on the assumption that the sample observations were selected independently, at random. Although this assumption either appears in "fine print" or not at all, it constitutes the basis of all confidence intervals and tests of hypotheses; their validity rests on it. The ubiquity of the SRS assumption in statistical theory can be explained by the basic nature of SRS and by its facilitation of interesting theoretical results. On the other hand, most social research, especially survey work, is in fact carried out by means of complex sample designs. Simple random selection of human populations is a rare phenomenon, limited to small and confined populations. Socially important human populations are usually large and widely scattered, so that a simple random selection of them would prove in most cases to be uneconomical and impractical.
- Published
- 1957
- Full Text
- View/download PDF
32. A NOTE ON THE USE OF CONFIDENCE BANDS TO EVALUATE THE RELIABILITY OF A DIFFERENCE BETWEEN TWO SCORES.
- Author
-
Feldt, Leonard S.
- Subjects
TEST scoring ,STATISTICAL hypothesis testing ,STATISTICAL reliability ,CONFIDENCE intervals ,RATING of students ,GRADING of students ,MEASUREMENT errors ,STATISTICAL bias ,EDUCATIONAL tests & measurements - Abstract
The article discusses the use of confidence bands to evaluate the reliability of a difference between two test scores. Over the past ten years, testing authorities have increasingly recommended the use of confidence bands in plotting test score profiles. When a profile of scores is presented in the form of percentile rank bands, attention is naturally drawn to the element of unreliability in the score information. The theory underlying the technique of comparing bands is relatively simple. If a test has a standard error of measurement and a confidence interval is to be obtained for an examinee's true score, the appropriate multiple of the standard error of measurement is added to and subtracted from his observed score. The resultant interval can be converted into percentile rank terms by referring the limits to the appropriate normative tables. The interval is based on the assumption that errors of measurement are normally distributed with constant variance at all true score levels. It is shown here that a requirement of non-overlap may result in a failure to recognize large true-score differences when they exist.
- Published
- 1967
- Full Text
- View/download PDF
33. A COMMENT ON THE ANALYSIS OF DATA GENERATED BY SIMULATION EXPERIMENTS.
- Author
-
Naylor, Thomas H. and Wonnacott, Thomas H.
- Subjects
DISTRIBUTION (Probability theory) ,CONFIDENCE intervals ,STATISTICAL hypothesis testing ,SELF-service (Economics) ,STATISTICAL sampling ,HYPOTHESIS ,INFERENCE (Logic) ,SIMULATION methods & models - Abstract
This article comments on an article related to improvements in current statistical analysis of computer-simulated data by Rosser T. Nelson. The purpose of the experiment presented in Nelson's article was to evaluate the effects of four different labor force sizes, three different queue disciplines, and five different machine selection procedures on the mean time in system, the variance of the time in system, the maximum time in system, and the distribution function of time in system. The authors discuss several inferences made by Nelson regarding his experiments, as well as the danger of making the assumptions on which those inferences rest.
- Published
- 1970
- Full Text
- View/download PDF
34. AN EXPECTED GAIN-CONFIDENCE LIMIT CRITERION FOR PORTFOLIO SELECTION.
- Author
-
Baumol, William J.
- Subjects
CONFIDENCE intervals ,STANDARD deviations ,PORTFOLIO management (Investments) ,STATISTICAL sampling ,ECONOMICS ,INVESTORS ,ANALYSIS of variance ,RISK assessment ,INVESTMENTS ,ASSET allocation ,FINANCIAL risk ,FINANCIAL risk management - Abstract
A new efficiency criterion is proposed for the Markowitz portfolio selection approach. It is shown that the use of standard deviation as a measure of risk in the original Markowitz analysis and elsewhere in economic theory is sometimes unreasonable. An investment with a relatively high standard deviation (σ) will be relatively safe if its expected value (E) is sufficiently high. For the net result may be a high expected floor, E - Kσ, beneath the future value of the investment (where K is some constant). The revised efficient set is shown to eliminate paradoxical cases arising from the Markowitz calculation. It also simplifies the task left to the investor. For it yields a smaller efficient set (which is a subset of the Markowitz efficient set) and therefore reduces the range of alternatives from among which the investor must still select his portfolio. The proposed criterion may also be somewhat more easily understood by the nonprofessional. [ABSTRACT FROM AUTHOR]
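Baumol's E - Kσ screen can be sketched as a dominance filter over (E, σ) pairs: a portfolio is kept only if no other portfolio offers both at least as high an expected gain E and at least as high a floor E - Kσ, with one strictly higher. The portfolios and the value of K below are hypothetical:

```python
def baumol_efficient(portfolios, k=1.0):
    """Keep portfolios not dominated under the expected gain-confidence
    limit criterion: higher E and higher floor E - k*sigma are preferred."""
    keep = []
    for e, s in portfolios:
        floor = e - k * s
        dominated = any(
            e2 >= e and e2 - k * s2 >= floor and (e2 > e or e2 - k * s2 > floor)
            for e2, s2 in portfolios
        )
        if not dominated:
            keep.append((e, s))
    return keep

# (expected return, standard deviation) pairs; hypothetical numbers
ports = [(8.0, 2.0), (10.0, 3.0), (12.0, 6.0), (9.0, 5.0)]
print(baumol_efficient(ports, k=1.0))
```

Note that (8.0, 2.0), which Markowitz's mean-variance criterion would retain as the lowest-σ option, is screened out here because (10.0, 3.0) offers both a higher E and a higher floor, illustrating why the revised efficient set is a smaller subset.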
- Published
- 1963
- Full Text
- View/download PDF
35. 39--AN APPRAISAL OF THE LENGTH MEASURES USED FOR COTTON FIBRES.
- Author
-
Woo, J. L.
- Subjects
EVALUATION ,COTTON ,FIBERS ,LENGTH measurement ,MATHEMATICS ,CHARTS, diagrams, etc. ,CONFIDENCE intervals ,PHYSICAL measurements ,PLANT products - Abstract
Four laboratory measures used for determining cotton-fibre length are expressed in mathematical forms. The measures are compared with reference to a hypothetical staple diagram. Length-uniformity measures are compared and their limits discussed. [ABSTRACT FROM AUTHOR]
- Published
- 1967
- Full Text
- View/download PDF
36. Confidence Intervals for Percentage Reductions.
- Author
-
ABRAMS, ALAN M., MCCLENDON, B. JERALD, and HOROWITZ, HERSCHEL S.
- Subjects
CONFIDENCE intervals ,BAYESIAN analysis ,STATISTICS ,CAVITY prevention ,TREATMENT effectiveness ,PREVENTIVE dentistry ,CONTROL groups - Abstract
A method of using information from previous studies is applied to the problem of estimating treatment effectiveness of a preventive agent in terms of percentage reduction. The Bayesian statistical method allows the use of such prior information in analyzing the results of a study. [ABSTRACT FROM AUTHOR]
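The abstract does not give the authors' Bayesian procedure, but the general idea of folding prior studies into a current estimate can be sketched with a conjugate normal update, where the posterior mean is a precision-weighted average. All numbers and the function name are hypothetical:

```python
def normal_posterior(prior_mean, prior_var, estimate, sampling_var):
    """Conjugate normal update: combine a prior (e.g. from earlier studies)
    with the current study's estimate, weighting each by its precision."""
    w_prior = 1.0 / prior_var
    w_data = 1.0 / sampling_var
    post_var = 1.0 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * estimate)
    return post_mean, post_var

# prior: 25% reduction with variance 16; current study: 35% with variance 25
print(normal_posterior(25.0, 16.0, 35.0, 25.0))
```

The posterior mean lands between the prior and the new estimate, and the posterior variance is smaller than either, which is the sense in which prior information sharpens the analysis.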
- Published
- 1972
- Full Text
- View/download PDF
37. POST ACCORD INTEREST RATES: A REPLY.
- Author
-
SMITH, V. KERRY and MARCIS, RICHARD G.
- Subjects
INTEREST rates ,CONFIDENCE intervals ,BUSINESS cycles ,TIME series analysis ,ESTIMATION theory ,ECONOMIC seasonal variations ,ACADEMIC debating ,ECONOMIC indicators ,MONETARY policy - Abstract
The article presents a reply by the authors to comments made by economist John E. Pippenger on their spectral and cross spectral analysis of post-accord interest rates in the article "A Time Series Analysis of Post-Accord Interest Rates," originally published in the June 1972 issue of "The Journal of Finance." The authors state that the intention of their study was to provide an inventory of the primary components of both policy operations by the monetary authorities and the fundamental economic determinants of interest rate movements.
- Published
- 1974
- Full Text
- View/download PDF
38. Asymptotic Expansions Related to Minimum Contrast Estimators
- Author
-
J. Pfanzagl
- Subjects
Statistics and Probability ,Discrete mathematics ,62E20 ,Maximum likelihood estimators ,Asymptotic distribution ,Estimator ,M-estimator ,Type (model theory) ,Asymptotic theory (statistics) ,asymptotic expansions ,Statistics ,tests ,Statistics, Probability and Uncertainty ,Likelihood function ,minimum contrast estimators ,Bootstrapping (statistics) ,confidence intervals ,62F10 ,Mathematics ,Probability measure - Abstract
This paper contains an Edgeworth-type expansion for the distribution of a minimum contrast estimator, and expansions suitable for the computation of critical regions of prescribed error (type one) as well as confidence intervals of prescribed confidence coefficient. Furthermore, it is shown that, for one-sided alternatives, the test based on the maximum likelihood estimator as well as the test based on the derivative of the log-likelihood function is uniformly most powerful up to a term of order $O(n^{-1})$. Finally, an estimator is proposed which is median unbiased up to an error of order $O(n^{-1})$ and which is--within the class of all estimators with this property--maximally concentrated about the true parameter up to a term of order $O(n^{-1})$. The results of this paper refer to real parameters and to families of probability measures which are "continuous" in some appropriate sense (which excludes the common discrete distributions).
- Published
- 1973
39. On Estimating the Common Mean of Two Normal Distributions
- Author
-
Arthur Cohen and Harold B. Sackrowitz
- Subjects
Statistics and Probability ,Mean squared error ,Truncated mean ,Estimator ,unbiased estimators ,Standard deviation ,Efficient estimator ,Bias of an estimator ,Sample size determination ,minimax estimators ,Bessel's correction ,Statistics ,inter-block information ,Common mean ,Statistics, Probability and Uncertainty ,62C15 ,confidence intervals ,62F10 ,Mathematics - Abstract
Consider the problem of estimating the common mean of two normal distributions. Two new unbiased estimators of the common mean are offered for the equal sample size case. Both are better than the sample mean based on one population for sample sizes of 5 or more. A slight modification of one of the estimators is better than either sample mean simultaneously for sample sizes of 10 or more. This same estimator has desirable large sample properties and an explicit simple upper bound is given for its variance. A final result is concerned with confidence estimation. Suppose the variance of the first population, say, is known. Then if the sample mean of that population, plus and minus a constant, is used as a confidence interval, it is shown that an improved confidence interval can be found provided the sample sizes are at least 3. 1. Introduction and summary. Consider random samples of size n from each of two independent normal distributions. The first distribution has mean θ and variance σ₁² and the second has mean θ and variance σ₂². Let X' = (X₁, X₂, ⋯, Xₙ) and Y' = (Y₁, Y₂, ⋯, Yₙ) denote these samples. The problem is to estimate the common mean θ when the loss function is (t − θ)²/σ₁². This loss function is chosen for convenience. Squared error loss or squared error divided by a positive function of (σ₁², σ₂²) could also be taken. This problem of estimating the common mean and the related problem of recovery of interblock information has been studied in several papers. For a brief bibliography and justification of some of the results studied here the reader is referred to the introduction of Brown and Cohen [2]. In this paper two new unbiased estimators for the common mean are suggested for the equal sample size case. Each estimator is uniformly better than the sample
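The paper's own estimators are not reproduced in the abstract, but the basic idea of combining two sample means of a common mean can be sketched with the classical Graybill-Deal combination, which weights each mean by its inverse estimated variance. This is a different, well-known estimator, not the authors'; the numbers are illustrative:

```python
def graybill_deal(xbar, s2x, n, ybar, s2y, m):
    """Graybill-Deal combined estimate of a common mean: weight each
    sample mean by the inverse of its estimated variance s2/n."""
    wx = n / s2x  # precision of xbar, i.e. 1 / (s2x / n)
    wy = m / s2y
    return (wx * xbar + wy * ybar) / (wx + wy)

# the combined estimate is pulled toward the more precise sample mean
print(graybill_deal(10.2, 4.0, 20, 9.8, 1.0, 20))
```

With equal sample sizes and equal estimated variances the weights cancel and the formula reduces to the simple average of the two sample means.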
- Published
- 1974
40. A Statistical Confidence Interval for True Per Cent Reduction in Caries-Incidence Studies.
- Author
-
DUBEY, SATYA D., LEHNHOFF, ROBERT W., and RADIKE, ARTHUR W.
- Subjects
CAVITY prevention ,CLINICAL drug trials ,CONFIDENCE intervals ,PHARMACEUTICAL research ,PREVENTIVE dentistry - Abstract
This article discusses the results of a research study conducted to establish a procedure for calculating a confidence interval to determine the efficacy of proposed anticariogenic agents. The researchers state that most studies of anticariogenic agents base their success on per cent reduction of caries incidence.
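The study's exact procedure is not given in the abstract. As a generic alternative sketch, a large-sample delta-method interval for per cent reduction 100·(1 − m_t/m_c) can be computed from summary statistics of the treated and control groups; the function name and all numbers are hypothetical, and this is not presented as the authors' method:

```python
import math

def pct_reduction_ci(mc, sc, nc, mt, st, nt, z=1.96):
    """Approximate CI for percent reduction 100 * (1 - mt/mc), using a
    delta-method variance for the ratio of two independent sample means."""
    r = mt / mc
    var_r = r * r * ((st * st / nt) / (mt * mt) + (sc * sc / nc) / (mc * mc))
    half = z * math.sqrt(var_r)
    return 100 * (1 - r - half), 100 * (1 - r + half)

# control: mean 6.0 DMF, SD 3.0, n = 100; treated: mean 4.2, SD 2.5, n = 100
print(pct_reduction_ci(6.0, 3.0, 100, 4.2, 2.5, 100))
```

The point estimate here is a 30% reduction; the interval around it conveys how much of that reduction could be sampling noise, which is the question the study's procedure addresses.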
- Published
- 1965
- Full Text
- View/download PDF