
Self-Correcting Problems in Statistics

Authors :
Paul Jedamus
G. A. Whitmore
John Neter
William Wasserman
Source :
Journal of the American Statistical Association. 68:752
Publication Year :
1973
Publisher :
JSTOR, 1973.

Abstract

The task of writing a self-correcting problem book in statistics that can be used with many different texts is most difficult. Whitmore, Neter and Wasserman make such an attempt in Self-Correcting Problems in Statistics. The scope of the material is good, including all of the topics covered in most introductory business and economics texts, such as descriptive statistics, classical inference and hypothesis testing, Bayesian analysis, simple regression, time series analysis and index numbers. There are also sections on multiple regression, tests of goodness of fit, contingency tables and analysis of variance. In an attempt to minimize the problem of incompatibility, the authors provide an extensive glossary of symbols and a table cross-referencing sections of their book with chapters in thirty standard texts.

The authors opt for a minimum of exposition in tying together the exercises and problems. A consequence of this decision is a tendency to appear cookbookish and formula-oriented, especially where the subject matter is relatively difficult. This tendency is compensated for to some extent by introducing new material in the problems and by trying to get the student to generalize about the nature of the statistical process in question by observing what happens in particular problem situations. This process has disadvantages as well as obvious advantages. While the ability of the student to observe what is happening and to generalize from particular situations is promoted, the situations are contrived so that the student is led to make the right generalization on the basis of very little evidence. Perhaps he should also be warned that when he tries this in unstructured, real-world situations, he might not be so lucky.

New material, unfortunately, is frequently introduced in review problems. The first time the student is confronted specifically with the notion that s can be used as an estimator of σ is in the answer to review question 2b (p. 84). The fact that σₚ is a maximum when p = .5 is found in the answer to review question 2d (p. 92). The sign test is introduced in review problem 1 (p. 166). The fact that exponential smoothing can be of more than single order is noted only in the answer to review question 2c (p. 316).
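For reference, the fact about σₚ cited above is the standard result for the standard deviation of a sample proportion; in conventional notation (which may differ from the book's),

    \sigma_p = \sqrt{\frac{p(1-p)}{n}},

and since p(1 − p) attains its maximum of 1/4 at p = .5, σₚ is largest at p = .5, where it equals 1/(2√n).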
Within the conditions just stated, and remembering that this is a supplement to a text rather than a text itself, most of the topics are well covered. The exposition is usually clear and there are exceptionally few typographical or computational errors. The worst sections by far are the Bayesian ones, where it is clear that the authors' hearts really are elsewhere. Decision making under uncertainty is left entirely in the air: the basis for a choice among maximax, maximin, or minimax regret criteria is completely ignored, and the argument offered for decision making under uncertainty is weak. Further, no attempt is made to relate the Bayesian sections to those on classical inference.

Caught in the no man's land between a problem book and a full text, the authors sometimes fall into other types of strategic errors. They do not define terms carefully. For example, they do not define "probability" (pp. 36-37), "discrete" (p. 49), "simple random sample" (p. 83), or "unit normal loss function" (p. 267). Important concepts are sometimes not explained. For example, on p. 21, in developing the formula X̄ = ΣfX/n, no mention is made of the fact that X in this context represents the mid-values of the classes, or that the sample mean so obtained is an approximation contingent upon a number of assumptions about the frequency distribution. On p. 28, s = …
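For reference, the grouped-data formulas at issue on pp. 21 and 28 are presumably the standard ones (an assumption, since the book's exact presentation is not reproduced here):

    \bar{X} \approx \frac{\sum f\,m}{n}, \qquad s \approx \sqrt{\frac{\sum f\,(m - \bar{X})^2}{n - 1}},

where f is a class frequency, m the corresponding class mid-value, and both quantities approximate the ungrouped-sample values.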

Details

ISSN :
0162-1459
Volume :
68
Database :
OpenAIRE
Journal :
Journal of the American Statistical Association
Accession number :
edsair.doi...........6a9ff3c81d219541d43d34341a22bb71