1,086 results for "Sample size determination"
Search Results
2. THE USE OF COHERENT OPTICAL PROCESSING TECHNIQUES FOR THE AUTOMATIC SCREENING OF CERVICAL CYTOLOGIC SAMPLES
- Author
-
J. Mendelsohn, R. E. Kopp, H. Stone, Benjamin J. Pernick, R. Wohlers, and J. Lisa
- Subjects
Pathology ,medicine.medical_specialty ,Histology ,Materials science ,Sample (material) ,Statistics as Topic ,Uterine Cervical Neoplasms ,Cervix Uteri ,Fourier spectrum ,symbols.namesake ,medicine ,Humans ,Scattering, Radiation ,Malignant cells ,Microscopy ,Autoanalysis ,Training set ,Fourier Analysis ,Staining and Labeling ,Computers ,Lasers ,Cervical cells ,Optical processing ,Fourier transform ,Sample size determination ,symbols ,Female ,Anatomy ,Mathematics ,Biomedical engineering - Abstract
Coherent optical signal processing techniques applied to the screening of exfoliated cytologic cervical samples were evaluated. The two-dimensional Fourier spectrum of some 80 isolated cells was obtained from high resolution cell photographs and recorded. Quantitative spectrum data were collected on a training set consisting of 15 categorized malignant cells taken from slides diagnosed as malignant and 10 categorized normal cells taken from slides diagnosed as normal. A variety of transform parameters were measured, and certain combinations of them were found which completely separated the normal from the malignant cells. Certain parameters were shown to be functionally related to the cell diameter, nuclear diameter, and nuclear density. Other parameters were found which appear to be related to other cell features, such as clumping of nuclear deoxyribonucleic acid. When they were used in combination with the previous parameters, they greatly enhanced the discriminatory capabilities. Computer model studies formed the basis for our experimental design and parameter selection and have validated many of our experimental results. Although the sample size to date is admittedly small, our results have been very encouraging. The purpose of this paper is to report on the initial results obtained in a study to determine the effectiveness of coherent optical processing techniques when applied to the screening of exfoliated cervical cytologic samples for malignancy. These techniques rely on the diffraction properties of coherent light, which are closely related to the scattering techniques used by the groups at the Lawrence Livermore Laboratories and the Max-Planck Institute for cytologic screening and sample enrichment. The specific approach used in this investigation is concerned with the relationship of the cell image to its two-dimensional Fourier transform, which is generated optically by a well-known method described below. The success of the screening approach depends upon the ability to measure and discriminate between the Fourier spectra of malignant and normal exfoliated cervical cells. In most instances the measured discriminating parameters can be functionally related to well-known cytologic characteristics which are conventionally used to discriminate between malignant and normal exfoliated cervical cells.
- Published
- 1974
3. Psychological Needs of Suburban Male Heroin Addicts
- Author
-
Sidney Merlis, Charles Sheppard, John Fracchia, and Elizabeth Ricca
- Subjects
Adult ,Male ,medicine.medical_specialty ,Methadone maintenance ,Adolescent ,Personality Inventory ,Personality development ,media_common.quotation_subject ,Population ,Education ,Diagnosis, Differential ,Sex Factors ,mental disorders ,medicine ,Humans ,Psychiatry ,education ,General Psychology ,media_common ,Motivation ,education.field_of_study ,Heroin Dependence ,Addiction ,Edwards Personal Preference Schedule ,Achievement ,Prognosis ,Aggression ,Psychotherapy ,Sample size determination ,Business, Management and Accounting (miscellaneous) ,Objective test ,Personality Assessment Inventory ,Psychology ,Clinical psychology - Abstract
Summary The Edwards Personal Preference Schedule (EPPS), an objective test of Murray's theory of personality development, was completed by 51 male applicants to a county methadone maintenance program. Tests of significance (t) were applied to the suburban heroin addict sample (n = 51) and to the general adult male normative sample (n = 4031) to determine whether they scored differently on the 15 EPPS psychological need constructs. Because of the disproportionate sample sizes, a hypothetical sample (n = 51) was drawn from the normative sample for comparative purposes. The questions raised in these analyses were the following: Do heroin addicts differ in psychological need structure from the general adult male population? What motivates and directs their behavior? What factors contribute to psychological readiness to abuse drugs? What may make addicts resistant to psychotherapy?
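A minimal sketch of the comparison strategy described above (drawing an equal-sized random subsample from a large normative pool before applying a t-test). All data and score scales below are synthetic assumptions, not the study's EPPS data.

```python
# Minimal sketch (not the authors' analysis): compare a small clinical sample
# against an equal-sized random draw from a large normative pool, avoiding
# grossly unequal group sizes in a t-test. All data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
addicts = rng.normal(loc=15.0, scale=4.0, size=51)       # hypothetical EPPS-need scores
normative = rng.normal(loc=13.0, scale=4.0, size=4031)   # hypothetical normative pool

# Draw a "hypothetical sample" of matching size from the normative pool.
norm_subsample = rng.choice(normative, size=51, replace=False)

t_stat, p_value = stats.ttest_ind(addicts, norm_subsample)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```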
- Published
- 1974
4. A Cross-Validation Approach to Sample Size Determination for Regression Models
- Author
-
Colin N. Park and Arthur L. Dudycha
- Subjects
Statistics and Probability ,Sample size determination ,Bayesian multivariate linear regression ,Statistics ,Regression analysis ,Statistics, Probability and Uncertainty ,Segmented regression ,Regression diagnostic ,Factor regression model ,Cross-validation ,Nonparametric regression ,Mathematics - Abstract
A cross-validation approach to the a priori determination of sample size requirements and the a posteriori estimates of the validity of a derived regression equation is developed for regression models, where sampling from multivariate normal populations is discussed in particular. Tables of sample size estimates are presented for the random model and their applications illustrated. An algorithm is given to obtain tables for the fixed model directly from those for the random case.
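The paper's tables are not reproduced here, but the basic cross-validation quantity involved (how well a regression fitted on n cases predicts fresh cases drawn from the same multivariate normal model) can be illustrated by simulation. Everything below, including the coefficients and noise level, is an illustrative assumption, not the authors' algorithm.

```python
# Illustrative sketch, not the paper's tables: estimate how the out-of-sample
# (cross-validated) correlation of a fitted regression shrinks with the
# fitting sample size n, under a random multivariate-normal model.
import numpy as np

rng = np.random.default_rng(1)
p, beta, noise_sd = 4, np.array([0.5, 0.4, 0.3, 0.2]), 1.0

def validity(n_fit, n_val=5000, reps=500):
    vals = []
    for _ in range(reps):
        X = rng.normal(size=(n_fit, p))
        y = X @ beta + rng.normal(scale=noise_sd, size=n_fit)
        bhat, *_ = np.linalg.lstsq(X, y, rcond=None)          # fit on n_fit cases
        Xv = rng.normal(size=(n_val, p))
        yv = Xv @ beta + rng.normal(scale=noise_sd, size=n_val)
        vals.append(np.corrcoef(Xv @ bhat, yv)[0, 1])          # validity on fresh cases
    return np.mean(vals)

for n in (15, 30, 60, 120):
    print(n, round(validity(n), 3))
```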
- Published
- 1974
5. One-sided tolerance limits for a normal population based on censored samples
- Author
-
C. B. Sampson and I. J. Hall
- Subjects
Statistics and Probability ,Normal distribution ,One sided ,Sample size determination ,Applied Mathematics ,Modeling and Simulation ,Statistics ,Econometrics ,Normal population ,Sample (statistics) ,Statistics, Probability and Uncertainty ,Mathematics - Abstract
A procedure for constructing one-sided tolerance limits for a normal distribution which are based on a censored sample is given. The factors necessary for the calculation of such limits are also given for several different sample sizes.
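The censored-sample factors tabulated in the paper are not reproduced here. For context only, the exact one-sided tolerance factor for a complete (uncensored) normal sample can be computed from the noncentral t distribution, as sketched below.

```python
# Context only: exact one-sided normal tolerance factor for a COMPLETE sample
# (the paper's censored-sample factors are tabulated there, not derived here).
# An upper limit covering proportion p with confidence gamma is xbar + k * s.
import numpy as np
from scipy.stats import norm, nct

def one_sided_tolerance_factor(n, p=0.90, gamma=0.95):
    delta = norm.ppf(p) * np.sqrt(n)                 # noncentrality parameter
    return nct.ppf(gamma, df=n - 1, nc=delta) / np.sqrt(n)

for n in (10, 20, 50):
    print(n, round(one_sided_tolerance_factor(n), 3))
```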
- Published
- 1973
6. The detection and correction of outlying determinations that may occur during geochemical analysis
- Author
-
Peter K. Harvey
- Subjects
Set (abstract data type) ,Geochemistry and Petrology ,Sample size determination ,Computer science ,Monitoring data ,Outlier ,Data mining ,Replicate ,computer.software_genre ,computer - Abstract
‘Wild’, ‘rogue’ or outlying determinations occur periodically during geochemical analysis. Existing tests in the literature for the detection of such determinations within a set of replicate measurements are often misleading. This account describes the chances of detecting outliers and the extent to which correction may be made for their presence in sample sizes of three to seven replicate measurements. A systematic procedure for monitoring data for outliers is outlined. The problem of outliers becomes more important as instrumental methods of analysis become faster and more highly automated; a state in which it becomes increasingly difficult for the analyst to examine every determination. The recommended procedure is easily adapted to such analytical systems.
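The article's own detection and correction rules are not reproduced here. As an illustration of the kind of screen involved for a small set of replicates, a standard single-outlier test (Grubbs' test) might look like the following sketch; the data are hypothetical.

```python
# Illustrative only: Grubbs' single-outlier test for a small set of replicate
# determinations (the article's own procedure is not reproduced here).
import numpy as np
from scipy import stats

def grubbs_outlier(x, alpha=0.05):
    x = np.asarray(x, dtype=float)
    n = len(x)
    g = np.max(np.abs(x - x.mean())) / x.std(ddof=1)
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    suspect = x[np.argmax(np.abs(x - x.mean()))]
    return suspect, g, g_crit, g > g_crit

replicates = [10.2, 10.4, 10.1, 10.3, 12.9]   # hypothetical replicate analyses
print(grubbs_outlier(replicates))
```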
- Published
- 1974
7. The characterization of patchy mixtures
- Author
-
A.M. Scott and J. Bridgwater
- Subjects
Work (thermodynamics) ,Correlation coefficient ,Chemistry ,Applied Mathematics ,General Chemical Engineering ,Sample (material) ,Isotropy ,Analytical chemistry ,Thermodynamics ,Contrast (statistics) ,General Chemistry ,Composition (combinatorics) ,Industrial and Manufacturing Engineering ,Characterization (materials science) ,Sample size determination - Abstract
Two-component isotropic mixtures may be characterized by a correlation coefficient between sample compositions. Previous studies of this type have often been hard to use in those practical situations in which it is clear that sample composition depends on sample size. Alternatively the composition-distance relationship has not been fully preserved. Here it is shown that a more satisfactory characterization is obtained by basing a correlation coefficient upon elements of the mixture, an element being much smaller than the smallest constituent unit capable of independent movement and thus having the composition of one of the pure components. This correlation coefficient is independent of element size and may be used to provide a precise description of the statistical properties of the mixture. For example the variance of macroscopic circular and spherical samples may be deduced as may the correlation coefficient for linear macroscopic samples. The theory leads to a discussion of other work, of free space in particulate mixtures and of the necessary number of parameters required in the correlation coefficient. A practical result is that the variance of samples taken from a non-random isotropic mixture becomes inversely proportional to sample volume if the samples are sufficiently large, in contrast to work by Bourne and Landry.
- Published
- 1974
8. Maximum likelihood estimates of the parameters of a mixture of two regression lines
- Author
-
David W. David
- Subjects
Statistics and Probability ,Score test ,Restricted maximum likelihood ,Estimation theory ,Sample size determination ,Statistics ,Linear regression ,Expectation–maximization algorithm ,Statistics::Methodology ,Maximum likelihood sequence estimation ,Likelihood function ,Statistics::Computation ,Mathematics - Abstract
The method of maximum likelihood is used to estimate the parameters of a mixture of two regression lines. The results of a small simulation study show that when the sample size exceeds 250 and the regression lines are more than three standard deviations apart for at least one half of the data, the maximum likelihood estimates are reliable. When this is not the case, their sampling variances are so large that the estimates may not be reliable.
- Published
- 1974
9. Poisson process and distribution-free statistics
- Author
-
Meyer Dwass
- Subjects
Distribution free ,Statistics and Probability ,Applied Mathematics ,010102 general mathematics ,Poisson process ,Empirical distribution function ,01 natural sciences ,Connection (mathematics) ,symbols.namesake ,010104 statistics & probability ,Sample size determination ,symbols ,Point (geometry) ,Statistical physics ,0101 mathematics ,Mathematics - Abstract
The well-known connection between the Poisson process and empirical c.d.f.'s is exploited from a new point of view. Distributions of functions of empirical c.d.f.'s for finite sample size n are explicitly described in some new examples, and new qualitative information is obtained for some classical examples.
- Published
- 1974
10. The Effects of Violating the Normality Assumption Underlying r
- Author
-
Zachary H. Levine and Richard A. Zeller
- Subjects
education.field_of_study ,Sociology and Political Science ,media_common.quotation_subject ,05 social sciences ,Population ,050401 social sciences methods ,0506 political science ,Interpretation (model theory) ,0504 sociology ,Sample size determination ,Statistics ,050602 political science & public administration ,Econometrics ,education ,Social Sciences (miscellaneous) ,Normality ,media_common ,Mathematics - Abstract
Using randomly generated samples from populations having varying rhos, different shapes of distributions, and varying sample sizes, the effects of violating the population normality assumption underlying r were examined. Results indicate that the normality assumption underlying r is robust (i.e., violation of the population normality assumption does not seriously alter the interpretation of r). Violation of the population normality assumption appears, therefore, to be insufficient reason to deny r a place as a major tool for sociological analysis.
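A minimal Monte Carlo sketch in the same spirit (all settings are illustrative assumptions, not the authors' design): check whether the nominal 5% test of rho = 0 based on r keeps its size when the parent distributions are strongly skewed.

```python
# Illustrative Monte Carlo: does the usual test of rho = 0 based on r hold its
# nominal 5% size when the parent populations are skewed (chi-square) rather
# than normal? Settings are assumptions, not those of the original study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def rejection_rate(n, reps=5000, alpha=0.05):
    rejections = 0
    for _ in range(reps):
        x = rng.chisquare(df=2, size=n)   # skewed, independent marginals,
        y = rng.chisquare(df=2, size=n)   # so the true rho is 0
        _, p = stats.pearsonr(x, y)
        rejections += (p < alpha)
    return rejections / reps

for n in (10, 30, 100):
    print(n, rejection_rate(n))
```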
- Published
- 1974
11. Tests on categorical data from the unionintersection principle
- Author
-
Lyman L. McDonald, Kim D. Weaver, and Donald A. Anderson
- Subjects
Statistics and Probability ,Wishart distribution ,Multivariate analysis ,Multinomial test ,Sample size determination ,Homogeneity (statistics) ,Statistics ,Test statistic ,Multinomial distribution ,Categorical variable ,Mathematics - Abstract
The union-intersection principle developed by S. N. Roy [13] has become an important tool in multivariate analysis. In this paper the union-intersection principle is applied to obtain some of the standard tests of hypothesis on categorical data, as well as a new test for homogeneity in an r×c table. In particular, tests of hypothesis on a single multinomial distribution and tests for the comparison of two multinomials are derived on the union-intersection principle and the corresponding simultaneous confidence intervals obtained. A test for homogeneity in an r×c table is derived on the union-intersection principle, and for the case of equal sample size from each of the r populations it is shown that the test statistic is distributed as the largest root of a central Wishart matrix.
- Published
- 1974
12. Efficient Sampling by Artificial Attributes
- Author
-
Avraham Beja and Shaul P. Ladany
- Subjects
Statistics and Probability ,Variable (computer science) ,Sample size determination ,Applied Mathematics ,Modeling and Simulation ,Statistics ,Sampling design ,Sampling (statistics) ,Cluster sampling ,Fraction (mathematics) ,Limit (mathematics) ,Lot quality assurance sampling ,Mathematics - Abstract
Hypotheses about the fraction of items in a lot possessing a “specification attribute” X < L can be tested either by sampling the variable X itself or by directly sampling the attribute of interest. When the process variance is known, it is often more efficient to test against “compressed limits” for one or more “artificial” attributes X < La, X < Lb, etc. This study discusses the efficient choice of one or two compressed limits. General guidelines for this choice are suggested and then evaluated under many hypothetical test specifications. One compressed limit offered about 40%–97% savings over direct attribute sampling; two limits allowed about 20% further savings.
- Published
- 1974
13. A note on the relationship between size of area and soil moisture variability
- Author
-
S.G. Reynolds
- Subjects
Moisture ,Sample size determination ,Moisture measurement ,Coefficient of variation ,Environmental science ,Gravimetric analysis ,Soil science ,Water content ,Water Science and Technology - Abstract
The influence of size of area on soil moisture variability in the surface 5–8 cm is examined using thirteen plot sizes ranging from 1 m² to 6 million m². The gravimetric weight basis method of moisture measurement is employed. Although size of area and moisture variability are shown to be closely related, with r² of 0.70, in practical terms variability is probably best considered in terms of three broad areal classes (1–1,000 m², 1,000–100,000 m², and 100,000–approx. 10 million m²), each of which has representative values for the coefficient of variation and required sample size.
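The representative values themselves are in the paper. As a generic illustration only (not the author's formula), a coefficient of variation is often turned into a required sample size for estimating a mean to within a chosen relative error, as sketched below.

```python
# Generic illustration (not taken from the paper): required sample size to
# estimate a mean to within a given relative error, given a coefficient of
# variation (CV), using the usual normal-approximation formula
#   n = (z * CV / relative_error)**2
import math
from scipy.stats import norm

def required_n(cv, relative_error=0.10, confidence=0.95):
    z = norm.ppf(1 - (1 - confidence) / 2)
    return math.ceil((z * cv / relative_error) ** 2)

for cv in (0.10, 0.20, 0.40):   # hypothetical CVs for small, medium, large areas
    print(cv, required_n(cv))
```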
- Published
- 1974
14. The Proportional Closeness and the Expected Sample Size of Sequential Procedures for Estimating Tail Probabilities in Exponential Distributions
- Author
-
S. Zacks
- Subjects
Statistics and Probability ,Sequential estimation ,Exponential distribution ,Sample size determination ,Modeling and Simulation ,Statistics ,Applied mathematics ,Asymptotic distribution ,Probability distribution ,Probability density function ,Random variable ,Mathematics ,Exponential function - Abstract
This paper discusses the problem of exact determination of the proportional closeness probability and expected sample size of a sequential procedure for estimating tail probabilities in exponential distributions. The stopping variable is based on the asymptotic normality of the maximum likelihood estimator. For small and medium sample sizes an exact recursive procedure is derived. This procedure fails in large-sample cases due to computational problems. For this reason large-sample approximations are developed. These approximations are based on an analogous problem for Wiener processes. They provide very good numerical procedures.
- Published
- 1974
15. Spatial dispersion of an estuarine benthic faunal community
- Author
-
Rutger Rosenberg
- Subjects
geography ,Biomass (ecology) ,geography.geographical_feature_category ,Ecology ,Fauna ,Sediment ,Estuary ,Aquatic Science ,Biology ,Oceanography ,Benthic zone ,Sample size determination ,Spatial dispersion ,Dispersion (optics) ,Ecology, Evolution, Behavior and Systematics - Abstract
By means of a box-sampler 20 moderately undisturbed sediment samples were obtained, which were subsampled on board the ship. The fauna in the upper 0–5 cm of the sediments was compared to that in the 5–10 cm layer; almost all species collected in both strata were found in the upper 0–5 cm. About 64% of the individuals and 74% of the biomass were restricted to this upper layer. A sample area of 0.5 m² (depth 0–10 cm) was found to be sufficient to make a quantitative evaluation of the benthic community. The horizontal dispersion of the macrobenthic community was studied using the variance/mean ratio, and its dependence on sample size is discussed. The abundant species occurred in patches larger than 0.06 m², and high densities were correlated with aggregation.
- Published
- 1974
16. Homogeneity of Variance
- Author
-
John T. Roscoe and William R. Veitch
- Subjects
Levene's test ,Sample size determination ,Hartley's test ,Monte Carlo method ,Statistics ,Developmental and Educational Psychology ,Kurtosis ,Sampling (statistics) ,Cochran's C test ,Mathematics::Geometric Topology ,Education ,Cochran's Q test ,Mathematics - Abstract
A Monte Carlo technique was employed in order to compare the relative power and robustness of the Bartlett, Cochran, Hartley, and Levene tests for homogeneity of variance. The Cochran test proved to be the most robust and powerful procedure when sampling from normal, leptokurtic, and skewed distributions, while the Levene test proved most useful with uniform data. The Cochran test was successfully adapted for use with samples of unequal size by utilizing the harmonic mean of the sample sizes for obtaining tabled critical values.
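A minimal sketch of the quantities involved (synthetic data; the critical-value tables themselves are not reproduced): Cochran's C statistic and the harmonic mean of the group sizes used to enter the table when the groups are unequal.

```python
# Illustrative sketch: Cochran's C statistic for homogeneity of variance, and
# the harmonic mean of unequal group sizes used to enter the critical-value
# table (the tables themselves are not reproduced here). Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
groups = [rng.normal(0, 1, size=n) for n in (8, 12, 15, 20)]   # hypothetical samples

variances = np.array([g.var(ddof=1) for g in groups])
cochran_c = variances.max() / variances.sum()

sizes = np.array([len(g) for g in groups])
harmonic_mean_n = len(sizes) / np.sum(1.0 / sizes)

print(f"Cochran's C = {cochran_c:.3f}, harmonic mean n = {harmonic_mean_n:.1f}")
```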
- Published
- 1974
17. Repeated significance test II, for hypotheses about the normal distribution
- Author
-
Ester Samuel-Cahn
- Subjects
Statistics and Probability ,Normal distribution ,Sample size determination ,Modeling and Simulation ,Statistical significance ,Statistics ,Structure (category theory) ,Z-test ,Variance (accounting) ,Power function ,Brownian motion ,Mathematics - Abstract
For testing a null hypothesis H0 about the mean μ against one-sided (and two-sided) alternatives, when the observations are independent normal with known variance, tests of the following structure are considered: for fixed n, stop with the first i ≤ n for which the test statistic based on the first i observations crosses a prescribed significance boundary, and reject H0; otherwise stop with n observations and accept H0. Bounds on the power function and expected sample size are obtained, and these become exact limits as n→∞. The power function of these tests never falls below 94% of that of the corresponding nonsequential UMP (UMPU) tests.
- Published
- 1974
18. Numerical Techniques for Evaluating Sample Information
- Author
-
Chi-Yuan Lin
- Subjects
Statistics and Probability ,Mathematical optimization ,Applied Mathematics ,Multiple integral ,Monte Carlo method ,Sample (statistics) ,Numerical integration ,Sample size determination ,Modeling and Simulation ,Expected value of sample information ,Nyström method ,Applied mathematics ,Quasi-Monte Carlo method ,Mathematics - Abstract
This paper is concerned with the evaluation of multiple integrals to determine the Expected Value of Sample Information (EVSI). It first examines the accuracy and the computational efficiency of the existing numerical methods—the numerical integration method, the Monte Carlo method, and the fixed fractile method—as applied to double integrals. It then develops two new methods—the numerical integration method and the quadrature method. These methods are applicable, not only to double integrals, but also to integrals of higher dimensions. The proposed numerical integration method is made possible by deriving an alternate expression for EVSI. This method yields accurate results, which allows us to ascertain the accuracy of other methods. The quadrature method has greatly increased the computational efficiency over the methods presently available. This efficiency makes it feasible to find optimal sample sizes for problems in multivariate statistical analysis.
- Published
- 1974
19. Uniform asymptotic joint normality of sample quantities in censored cases
- Author
-
Tadashi Matsunawa
- Subjects
Statistics and Probability ,Statistics::Theory ,Multivariate random variable ,Covariance matrix ,media_common.quotation_subject ,Order statistic ,Sample (statistics) ,Joint probability distribution ,Sample size determination ,Statistics ,Applied mathematics ,Normality ,Quantile ,Mathematics ,media_common - Abstract
The asymptotic joint distribution of an increasing number of sample quantiles as the sample size increases, when the underlying sample is censored, is shown to be asymptotically uniformly (or type (B) d ) normally distributed under fairly general conditions. The discussions for uncensored cases have been given by [4].
- Published
- 1973
20. Fixed width confidence intervals for restricted parameter range
- Author
-
J.C. Arnold and R.L. Andrews
- Subjects
Statistics and Probability ,Sequential estimation ,Location parameter ,Sample size determination ,Modeling and Simulation ,Statistics ,Credible interval ,Confidence distribution ,CDF-based nonparametric confidence interval ,Robust confidence intervals ,Confidence interval ,Mathematics - Abstract
Confidence intervals are presented for a location parameter that is known to be restricted to some range. Fixed-width intervals are given first for a fixed sample size with both the variance known and also unknown. Double sampling and sequential sampling procedures are then introduced which find a fixed width interval with a fixed minimum confidence coefficient.
- Published
- 1974
21. Power of Tests for Equality of Covariance Matrices
- Author
-
Richard L. Greenstreet and Robert J. Connor
- Subjects
Statistics and Probability ,Score test ,Applied Mathematics ,Multivariate normal distribution ,Covariance ,Likelihood principle ,Combinatorics ,Estimation of covariance matrices ,Sample size determination ,Modeling and Simulation ,Likelihood-ratio test ,Statistics ,Mathematics ,Statistical hypothesis testing - Abstract
In this paper a Monte Carlo study of four test statistics used to test the null hypothesis that two or more multivariate normal populations have equal covariance matrices is presented. The statistics −2 ln λ, −2 ln W, −2ρ₁ ln λ and −2ρ₂ ln W, where λ is the likelihood ratio criterion and W is Bartlett's modification of λ, are investigated. The results indicate that for small samples the actual significance levels of the first two statistics are somewhat greater than the nominal significance levels set, whereas for the latter two statistics the actual significance levels are close to the nominal levels. For these test statistics, −2ρ₁ ln λ and −2ρ₂ ln W, the power is seen to be essentially identical; further, it increases as the sample size increases, as the inequality of the covariance matrices increases, and as the number of variates increases. Also, their power is seen to be a concave function of the number of populations.
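A minimal numerical sketch of the unscaled Bartlett-type criterion on synthetic data (the scaled versions with correction factors studied in the paper are not reproduced here).

```python
# Illustrative sketch: Bartlett-type statistic -2 ln W for equality of two
# covariance matrices, computed on synthetic data. The scaled versions studied
# in the paper (with correction factors) are not reproduced here.
import numpy as np

rng = np.random.default_rng(4)
x1 = rng.multivariate_normal([0, 0, 0], np.eye(3), size=40)
x2 = rng.multivariate_normal([0, 0, 0], 1.5 * np.eye(3), size=60)

def minus2_ln_w(samples):
    covs = [np.cov(x, rowvar=False) for x in samples]          # unbiased S_i
    dfs = [len(x) - 1 for x in samples]
    pooled = sum(df * s for df, s in zip(dfs, covs)) / sum(dfs)
    logdet = lambda m: np.linalg.slogdet(m)[1]
    return sum(df * (logdet(pooled) - logdet(s)) for df, s in zip(dfs, covs))

print(round(minus2_ln_w([x1, x2]), 2))
```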
- Published
- 1974
22. Induced safety algorithm for hydrologic design under uncertainty
- Author
-
Ferenc Szidarovszky and Istvan Bogardi
- Subjects
Return period ,State variable ,Engineering ,Distribution function ,Distribution (number theory) ,Sample size determination ,business.industry ,Flow (psychology) ,Probability density function ,business ,Algorithm ,Uncertainty reduction theory ,Water Science and Technology - Abstract
The induced safety algorithm (ISA) is a method for calculating the margin of safety in the design of hydrologic structures under the uncertainty due to finite sample size. The ISA may be applied to the design of a new structure or the redesign of an existing one; the initial design criterion may be calculated by benefit-cost or be prescribed by regulation. The optimum decision is reached by maximizing the sum of three terms (in case of redesign to a larger value): the discounted economic benefit due to uncertainty reduction, the average loss averted, and the incremental construction cost. The uncertainty on the distribution function of the state variable (yearly peak flow) is encoded by the probability density function of flow pertaining to a fixed return period. Simulation is used to calculate the distribution of the state variable. Two examples of levee design in Hungary illustrate the method.
- Published
- 1974
23. Biochemical studies on the deep-water pelagic community of Korsfjorden, Western Norway methodology and sample design
- Author
-
Ulf Båmstedt
- Subjects
Ecology ,Sample size determination ,Sampling design ,Extraction (chemistry) ,Mineralogy ,Pelagic zone ,Desiccator ,Aquatic Science ,Biology ,Pulp and paper industry ,Deep water - Abstract
Material which had been dried for two days at 70°C and homogenized was dried for some further hours and stored in a desiccator until used for analysis. Care was taken always to weigh samples at constant temperature. Ash values were obtained most reliably and quickly by putting the material into a cold furnace and heating to 500°C. Suitable methods for estimation of chitin, lipid, protein, and carbohydrate in small quantities have been adopted, with modifications where necessary, and an extraction apparatus for lipids has been devised. Analytical results have been subjected to an analysis of variance on the basis of which recommendations are made as to sample size and the analysis of replicates and of more than one individual.
- Published
- 1974
24. Power Analysis of Research in Counselor Education
- Author
-
Richard F. Haase
- Subjects
Power (social and political) ,Clinical Psychology ,Power analysis ,Sample size determination ,Applied psychology ,Developmental and Educational Psychology ,Counselor education ,Research needs ,Psychology ,Statistical power ,Education ,Counseling research - Abstract
Statistical power analysis as a necessary and desirable activity in counselor education research is reviewed and discussed. Power of counseling research is discussed with respect to alpha levels, sample size, and size of effect. Four years of counselor education research are reviewed and analyzed according to the criteria of statistical power. Recommendations for increasing the power of research in counselor education are made.
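As an illustration of the kind of calculation the review calls for (using a modern library, not tooling available to the 1974 authors): solving for the per-group sample size that gives 80% power for a medium standardized effect in a two-group comparison.

```python
# Illustration of a routine power calculation (modern library, not part of the
# original review): sample size per group for a two-sample t-test with a
# "medium" standardized effect of d = 0.5, alpha = 0.05, power = 0.80.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n_per_group))   # roughly 64 per group
```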
- Published
- 1974
25. The Severity of Supposed Wave Conditions in Estimating Extreme Values of Wave Induced Response Variables of a Ship
- Author
-
Hajimu Mano
- Subjects
education.field_of_study ,Geography ,Sample size determination ,Climatology ,Population ,Range (statistics) ,Significant wave height ,education ,Extreme value theory ,Random variable ,Value (mathematics) ,Standard deviation - Abstract
The supposed wave condition has a considerable effect on the theoretically estimated extreme value of a wave-induced random variable of a ship. The author showed in a previous paper [1] that the severity of a given wave condition can be evaluated by the maximum wave height and its frequency of occurrence in each wave-period interval. In this paper, applying that evaluation method, the author examines the severity of the wave condition in winter, which is considered the roughest of all seasonal wave conditions. It is found from the wave statistics for the North Atlantic by Walden [3], those by Hogben and Lumb [4] (sea areas 1, 2, 6 and 7), and those for the North Pacific by the 80th Committee of the Japan Ship Research Association [5] (sea zone 3), that at longer wave-period intervals the highest wave of the year is observed in winter in almost all cases, but at shorter wave-period intervals it is sometimes observed in a season other than winter. Accordingly, although the extreme value of the variable estimated by supposing the winter wave condition instead of the all-season condition is generally larger, by at most about 10%, it is sometimes smaller than the all-season value. The population of the variable over all seasons is the total of its populations in each season. Since the winter population is expected to include the all-season extreme value, which is to be regarded as the exact value for a ship, the winter value is compared with the all-season value. The relation between these two values varies with the wave period Tmax at which the maximum standard deviation of the short-term distribution of the variable is observed. Where Tmax is over 5 sec the former is nearly equal to the latter in most cases, but where Tmax is from 5 to 9 sec the former is sometimes considerably less than the latter. A similar relation must hold between the extreme value predicted from long-term full-scale measurement of the variable in winter and the exact value. The effect of sample size must also be considered in this case, because the period of a full-scale test is usually shorter than the observation period of the wave statistics described above. It is concluded from this study that the predicted value may be nearly equal to the exact value where Tmax is from 7 to 13 sec, and may be considerably less than the exact value in other ranges of Tmax. This means that the correct extreme value of the wave bending moment may be predicted by full-scale measurement where the ship length is from 120 to 400 m (cargo ship) or from 100 to 320 m (tanker), but prediction by this method may be an underestimate where the ship length is outside these ranges. The method of seasonal wave observation needed to obtain more accurate data for estimating the extreme value than the winter wave statistics is also studied. Analysis of the above wave data makes it clear that the value of the variable for winter and autumn in the North Atlantic, or for winter and spring in the North Pacific, agrees accurately with the all-season value even if Tmax is in the range from 5 to 9 sec. This is a useful guide for wave observation when continuous measurement is not possible.
- Published
- 1974
26. Optimal Sampling Schemes for Estimating System Reliability by Testing Components—I: Fixed Sample Sizes
- Author
-
Donald A. Berry
- Subjects
Statistics and Probability ,Mathematical optimization ,Bayes' theorem ,Quadratic equation ,Series (mathematics) ,Sample size determination ,Statistics ,Bayesian probability ,Sampling (statistics) ,Function (mathematics) ,Statistics, Probability and Uncertainty ,Reliability (statistics) ,Mathematics - Abstract
A Bayesian decision theoretic approach is employed to compare sampling schemes designed to estimate the reliability of series and parallel systems by testing individual components. Quadratic loss is assumed and schemes are found which minimize Bayes risk plus sampling cost. Several kinds of initial information concerning the reliability of the individual components, all of which assume the components function independently, are considered. The case for which the initial information is in terms of the system's reliability is briefly considered and related to the aforementioned case.
- Published
- 1974
27. Sample Size Allocation in Two-Phase Sampling
- Author
-
Bahadur Singh and Joseph Sedransk
- Subjects
Statistics and Probability ,Sample size determination ,Modeling and Simulation ,Statistics ,Poisson sampling ,Sampling (statistics) ,Cluster sampling ,Bernoulli sampling ,Sample (statistics) ,Sampling fraction ,Simple random sample ,Mathematics - Abstract
When it is desired to estimate parameters of subpopulations which are not "identifiable in advance," one may use a two-phase sample design. That is, one selects a large, preliminary ("first phase") sample and identifies the subpopulation to which each element belongs. Then, for subpopulation j, a subsample is selected from the elements identified in the first-phase sample as being members of j. Finally, the variable of interest is measured for each element in this "second phase" sample. In this paper we consider both simple random and stratified random sampling at the first phase, while (in each case) simple random sampling is assumed at the second phase. Two types of sample size allocation problem are investigated: (1) determination of the sample size(s) at the first phase, and (2) determination of the second-phase sample sizes given the results of the first-phase sample.
- Published
- 1974
28. Variability of magnitude estimates: A timing theory analysis
- Author
-
David M. Green and R. Duncan Luce
- Subjects
Theory analysis ,Distribution (mathematics) ,Sample size determination ,Coefficient of variation ,Statistics ,Magnitude (mathematics) ,Experimental and Cognitive Psychology ,Intensity ratio ,Power function ,Signal ,Sensory Systems ,General Psychology ,Mathematics - Abstract
Three procedures for magnitude estimation were investigated, and a sufficient number of responses were obtained to make reasonable estimates of both the mean and variance of the responses. The conventional magnitude estimate procedure, without a standard signal, appeared to produce the most sensible data. The best method of establishing the central tendency of the data appears to be the plot of the mean ratio of successive responses against the intensity ratio of the corresponding signal intensities. When this is done, the average response ratio increases roughly as a power function of the signal ratios. The coefficient of variation, σ/m, varies from about 0.1 for small signal ratios and increases to 0.3 at about 20 dB and greater signal separations. The distribution of response ratios appears to be reasonably well approximated by a beta distribution. The change in σ/m with signal ratio is suggestive of an attention mechanism in which the sample size depends on the location of the attention band. The ratio estimation procedure suffers badly from discrete number tendencies.
- Published
- 1974
29. Intrasire and Intracow Regressions of Lactation Records on Herdmate Performance: Use, Estimators, and Biases
- Author
-
I.L. Mao
- Subjects
Daughter ,media_common.quotation_subject ,Sire ,Estimator ,Regression ,Sample size determination ,Statistics ,Linear regression ,Consistent estimator ,Genetics ,Econometrics ,Herd ,Animal Science and Zoology ,Food Science ,media_common ,Mathematics - Abstract
Herd-year-season effects are not individually adjustable and yet contribute nearly 50% of the total variation in production. The herdmate comparison method was designed to adjust records for this major source of environmental differences without knowledge of individual influences. Essentially, in evaluating sires the sire of a daughter group is the genetic unit, and daughter averages are regressed on their adjusted herdmate averages. In cow evaluation, however, the genetic unit is a cow; therefore, an intracow regression of her performance on her adjusted herdmate averages is pertinent. The intracow regression coefficient (b_c) is difficult to estimate, since only those cows making records in more than one herd contribute anything to the estimation. Estimates for the intrasire regression coefficient (b_s) are available. The relationship between b_c and b_s is b_c = 2b_s − 1, which provides a consistent estimator for b_c, with the bias factor being simply the inverse of the sample size for estimating b_s. The value .9 is commonly used for b_s in the sire proofs. The value for b_c in the cow index procedure for the similar purpose is then .8.
- Published
- 1974
30. Approximately Optimum Confidence Bounds for System Reliability Based on Component Test Data
- Author
-
Nancy R. Mann and Frank E. Grubbs
- Subjects
Statistics and Probability ,Exponential distribution ,Applied Mathematics ,Complex system ,Confidence interval ,Exponential function ,Sample size determination ,Modeling and Simulation ,Censoring (clinical trials) ,Statistics ,Applied mathematics ,Data system ,Test data ,Mathematics - Abstract
A unified treatment of three different models concerning the problem of obtaining suitably accurate confidence bounds on series or parallel system reliability from subsystem test data is explored and developed. The component or subsystem test data are assumed to be (1) exponentially distributed with censoring or truncation for a fixed number of failures, (2) exponentially distributed with truncation of tests at fixed times, and (3) binomially distributed (pass-fail) with fixed but different sample sizes and random numbers of failures for subsystem tests. Rather unique relations between the three models are found and discussed based on the binomial reliability study of Mann (1973). In fact, the approximate theory developed herein applies to “mixed” data systems, i.e. the case where some subsystem data are binomial and the others exponential in character. The extension of results to complex systems is also treated. The methodology developed for combining component failure data should perhaps be useful in p...
- Published
- 1974
31. Further Cross-Validation Analysis of the Bayesian m-Group Regression Method
- Author
-
Paul H. Jackson and Melvin R. Novick
- Subjects
Proper linear model ,05 social sciences ,050401 social sciences methods ,050301 education ,Prediction interval ,Regression analysis ,Cross-validation ,Education ,0504 sociology ,Sample size determination ,Bayesian multivariate linear regression ,Statistics ,Econometrics ,0503 education ,Regression diagnostic ,Unit-weighted regression ,Mathematics - Abstract
Novick, Jackson, Thayer, and Cole (1972) conducted a cross-validation study of the Bayesian m-group regression method developed by Jackson, Novick, and Thayer (1971) based on a theory described in Lindley and Smith (1972). The context of this cross validation was the use of ACT Assessment Scores to predict first semester grade point averages in traditional junior colleges. Within-group least squares regression lines were calculated in each of 22 carefully selected, academically oriented junior colleges using 1968 data, and these lines were used for prediction on 1969 data. The principal focus of accuracy of prediction was on the average over colleges of the mean-squared errors when the predictions for persons were compared with their actual attained grade point averages. The primary conclusions of the study were based on a 25% within-college sample. The sample sizes ranged from 26 to 184. The average, over the 22 colleges, of the mean-squared errors for the within-college regression was .62. This represents the average result to be expected if each college did its own work using only information from that college. There has always been the thought that some improvement on within-college least squares could be attained by some central prediction system. However, we are unaware of a single example in the literature where a large scale cross-validation study has substantially supported this contention. In the instances we know of, cross validation has either not been done, has not been done successfully, or has not been done on a sufficient scale to clearly establish the general validity of the system used. We assume that there have
- Published
- 1974
32. An Asymptotic Expansion of the Distribution of the Limited Information Maximum Likelihood Estimate of a Coefficient in a Simultaneous Equation System
- Author
-
T. W. Anderson
- Subjects
Statistics and Probability ,Predetermined variables ,Distribution (mathematics) ,Simultaneous equations ,Sample size determination ,Statistics ,Applied mathematics ,Endogeneity ,Statistics, Probability and Uncertainty ,Asymptotic expansion ,Least squares ,Mathematics ,Noncentrality parameter - Abstract
An asymptotic expansion is made of the distribution of the limited information maximum likelihood estimate of the coefficient of one endogenous variable in an equation with two endogenous variables when the coefficient of the other endogenous variable is prescribed to be unity. The equation is one of a system of simultaneous equations, and all the predetermined variables in the system are assumed to be exogenous. The expansion in terms of the noncentrality parameter, which increases with the sample size, is carried out to the order of the negative power of the noncentrality parameter (i.e., four terms). Comparison is made with the expansion of the distribution of the two-stage least squares estimate.
- Published
- 1974
33. Statistical analysis of water temperature residuals
- Author
-
Leland L. Long and Billy E. Gillett
- Subjects
Data point ,Yield (engineering) ,Sample size determination ,Harmonics ,Statistics ,Harmonic ,Residual ,Fourier series ,Regression ,Water Science and Technology ,Mathematics - Abstract
Water temperature residuals, which are taken as the difference between the daily average water temperature (Missouri River, Boonville, Missouri) and the Fourier series regression fit to the data points, are analyzed. A 1-yr record length is used as a base period. The Kolmogorov-Smirnov test is used to determine, for a given number of extracted harmonics, the sample size required to yield a set of normal random variates with mean zero. It is also shown that with only the fundamental harmonic removed, a sample size of at least 180 is required to yield a sample average residual of less than 0.2°F. When the fundamental and higher order harmonics are removed, this sample size is reduced considerably.
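A minimal sketch of the kind of computation described (synthetic daily temperatures, only the fundamental harmonic removed; not the paper's Missouri River data or record length).

```python
# Illustrative sketch (synthetic data): fit the fundamental annual harmonic to
# daily water temperatures by least squares, then apply a Kolmogorov-Smirnov
# test to the residuals to judge approximate normality with mean zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
days = np.arange(365)
true_temp = 55 + 20 * np.sin(2 * np.pi * days / 365 - 1.8)
observed = true_temp + rng.normal(scale=3.0, size=days.size)   # hypothetical record

# Design matrix: intercept plus the fundamental harmonic (sin and cos terms).
X = np.column_stack([np.ones_like(days, dtype=float),
                     np.sin(2 * np.pi * days / 365),
                     np.cos(2 * np.pi * days / 365)])
coef, *_ = np.linalg.lstsq(X, observed, rcond=None)
residuals = observed - X @ coef

stat, p = stats.kstest(residuals, 'norm', args=(0.0, residuals.std(ddof=1)))
print(f"mean residual = {residuals.mean():.3f}, KS p = {p:.3f}")
```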
- Published
- 1974
34. MEASURING AGREEMENT WHEN TWO OBSERVERS CLASSIFY PEOPLE INTO CATEGORIES NOT DEFINED IN ADVANCE
- Author
-
Richard J. Light and Robert L. Brennan
- Subjects
Statistics and Probability ,Contrast (statistics) ,General Medicine ,Normal distribution ,Level of measurement ,Arts and Humanities (miscellaneous) ,Sample size determination ,Statistics ,Test statistic ,Null hypothesis ,General Psychology ,Statistic ,Mathematics ,Statistical hypothesis testing - Abstract
Basic to many psychological investigations is the question of agreement between observers who independently categorize people. Several recent studies have proposed measures of agreement when a set of nominal scale categories has been predefined and imposed on two observers. This study, in contrast, develops a measure of agreement for settings where observers independently define their own categories. Thus it is possible for observers to delineate different numbers of categories, with different names. Computational formulae for the mean and variance of the proposed measure of agreement are given; further, a statistic with a large-sample normal distribution is suggested for testing the null hypothesis of random agreement. A computer-based comparison of the large-sample approximation with the exact distribution of the test statistic shows a generally good fit, even for moderate sample sizes. Finally, a worked example involving two psychologists' classifications of children illustrates the computations.
- Published
- 1974
35. 54—THE EFFECTS OF CONSTRUCTION PARAMETERS, SAMPLE SIZE, AND INCIDENT-SOUND LEVEL ON NOISE ABSORPTION BY CARPETING
- Author
-
K. Slater and E. Ann Nanson
- Subjects
Absorption (acoustics) ,animal structures ,Materials science ,Polymers and Plastics ,Materials Science (miscellaneous) ,Acoustics ,education ,Industrial and Manufacturing Engineering ,Noise ,Sample size determination ,Statistics ,General Agricultural and Biological Sciences ,Pile ,Test sample - Abstract
The influence of pile parameters (height, density, and weight) on echo-loss absorption by carpets is investigated by means of a swept-frequency technique. None of the three factors is found to be significant, and it is presumed that other parameters are more important in determining the acoustic behaviour of a carpet. The size of a test sample and the incident-noise level used in the test are shown not to affect the evaluation of sound-absorbing ability.
- Published
- 1974
36. SELECTING THE m POPULATIONS WITH LARGEST MEANS FROM k NORMAL POPULATIONS WITH UNKNOWN VARIANCES
- Author
-
W. K. Chiu
- Subjects
Statistics and Probability ,education.field_of_study ,Sample size determination ,Generalization ,Initial sample ,Population ,Statistics ,Sample (statistics) ,Variance (accounting) ,education ,Mathematics - Abstract
Summary This paper gives a two-sample procedure for selecting the m populations with the largest means from k normal populations with unknown variances. The method is a generalization of a recent work by Ofosu [1973] and hence should find wider practical applications. The experimenter takes an initial sample of preset size N0 from each population and computes an unbiased estimate of its variance. From this estimate he determines the second sample size for the population according to a table presented for this purpose. The populations associated with the m largest overall sample means will be selected. The procedure is shown to satisfy a confidence requirement similar to that of Ofosu.
- Published
- 1974
37. On Generating Random Variates from an Empirical Distribution
- Author
-
Yoshinori Asau and Hui-Chuan Chen
- Subjects
Convolution random number generator ,Sequence ,Random variate ,Distribution (number theory) ,Sample size determination ,Computation ,Statistics ,Empirical distribution function ,Industrial and Manufacturing Engineering ,Antithetic variates ,Mathematics - Abstract
This note presents a method for generating a sequence of random variates from an empirical distribution. Computational results show that the proposed method requires less computation time than two standard methods but requires only ten more words of memory. The saving in time becomes more significant as the number of distinct values contained in the distribution, or the sample size, increases.
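The note's own indexing scheme is not reproduced here; a common way to draw variates from an empirical (discrete) distribution in the same spirit is a table lookup on the cumulative relative frequencies, as sketched below with made-up data.

```python
# Illustrative sketch (not the note's exact method): draw random variates from
# an empirical distribution by binary search on the cumulative relative
# frequencies of the observed distinct values.
import numpy as np

rng = np.random.default_rng(6)
sample = rng.poisson(3.0, size=1000)                 # hypothetical observed data

values, counts = np.unique(sample, return_counts=True)
cum_probs = np.cumsum(counts) / counts.sum()

def draw(n):
    u = rng.random(n)
    return values[np.searchsorted(cum_probs, u)]

print(draw(10))
```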
- Published
- 1974
38. A Program for Three-Factor Analysis of Variance Designs With Repeated Measures on Two Factors for the Ibm 1130 Computer
- Author
-
Jamal Abedi
- Subjects
Applied Mathematics ,Rounding ,IBM 1130 ,Sampling (statistics) ,Repeated measures design ,Education ,Term (time) ,One-way analysis of variance ,Sample size determination ,Statistics ,Developmental and Educational Psychology ,Econometrics ,Analysis of variance ,Applied Psychology ,Mathematics - Abstract
This program for three-factor analysis of variance with repeated measures on two factors is based on an unweighted means analysis, permitting the use of unequal sample sizes. Each factor may be fixed or random since the program will select an appropriate error term or will calculate a quasi-F ratio. The number of possible levels of each factor is extended over those in other programs. Rounding errors are minimized.
- Published
- 1974
39. A National Sample Design
- Author
-
Jon H. Pammett, Jane Jenson, Lawrence LeDuc, and Harold D. Clarke
- Subjects
Sociology and Political Science ,Sample size determination ,National election ,Political science ,Sampling design ,Range (statistics) ,Regional science ,National level ,Sample (statistics) ,Survey research ,Diversity (business) - Abstract
The construction of national survey samples in Canada involves a number of theoretical and practical problems that range well beyond purely statistical considerations. Sample sizes for instance will tend to be large in comparison with those employed in many other nations not for greater accuracy but primarily because of the sheer diversity and geographical dispersion of the Canadian population. Likewise, the high cost of survey research at the national level in Canada virtually dictates that substantial efforts be made to achieve an “optimum” design for any single study. The purpose of this note therefore is to outline in brief the several considerations which underlay the design of a national sample for the post-election survey which will be carried out under our direction following the federal election of July 1974. It is offered as a commentary on the steps that were taken to provide a sample design consistent with the research focus of the project. The design has a number of important implications for both primary and secondary analysis of what will be the third national election study to be carried out in Canada. Each of these studies has followed particular theoretical interests, in all cases having important implications for potential use of the data set by other analysts.
- Published
- 1974
40. Tables for Determining Expected Cost Per Unit under MIL-STD-105D Single Sampling Schemes
- Author
-
Gerald G. Brown, Herbert C. Rutemiller, and Operations Research (OR)
- Subjects
Engineering ,Acceptance sampling ,Sample size determination ,business.industry ,Statistics ,Sampling (statistics) ,Fraction (mathematics) ,Lot quality assurance sampling ,Expected value ,business ,Quality assurance ,Industrial and Manufacturing Engineering ,Term (time) - Abstract
AIIE Transactions, 6, pp. 135-142. When a MIL-STD-105D sampling scheme is used for a long period, some lots will be subjected to normal, some to reduced, and some to tightened inspection. This paper provides, for several single sampling plans and various quality levels, the expected fraction of lots rejected, the expected sample size per lot, and the expected number of lots to be processed before sampling inspection must be discontinued. Equations are given to calculate the long-term cost of sampling inspection using these expected values and appropriate cost parameters.
- Published
- 1974
41. On the choice of the class intervals in the application of the chi-square test
- Author
-
Benno Schorr
- Subjects
Combinatorics ,Control and Optimization ,Distribution function ,Uniform norm ,Sample size determination ,Applied Mathematics ,Norm (mathematics) ,Mathematical analysis ,Chi-square test ,Management Science and Operations Research ,Mathematics - Abstract
MANN and WALD [3] were the first to give a formula for the choice of the optimal number k of equi-probable class intervals for the χ²-test. This formula leads to relatively large k over a range of sample sizes. For the determination of k they used the supremum norm on the space of distribution functions as the distance notion. BEIER-KÜCHLER and NEUMANN [5] improved these results and found that k can be reduced. In our paper a different norm on the space of distribution functions is used, and the results are in good agreement with those of BEIER-KÜCHLER and NEUMANN.
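For orientation only, one frequently quoted form of the Mann-Wald rule is k ≈ 4[2(n−1)²/c²]^(1/5), with c the upper-α standard normal quantile; the exact constant varies between sources, and the paper's own criterion differs.

```python
# Orientation only: a commonly quoted form of the Mann-Wald rule for the number
# of equi-probable class intervals, k = 4 * (2*(n-1)**2 / c**2)**(1/5), where c
# is the upper-alpha standard normal quantile. Sources state slightly different
# constants, and the paper under discussion uses a different criterion.
from scipy.stats import norm

def mann_wald_k(n, alpha=0.05):
    c = norm.ppf(1 - alpha)
    return 4 * (2 * (n - 1) ** 2 / c ** 2) ** 0.2

for n in (50, 200, 1000):
    print(n, round(mann_wald_k(n)))
```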
- Published
- 1974
42. Mathematical Probability and Statistics Computer Laboratory
- Author
-
Elliot A. Tanis
- Subjects
Mathematics (miscellaneous) ,Computer program ,Sample size determination ,Applied Mathematics ,Computer laboratory ,Statistics ,Computer-Assisted Instruction ,Dice ,Probability and statistics ,Education ,Mathematical probability ,Mathematics - Abstract
Summary Some concepts in probability and statistics can be illustrated with random experiments which use dice, cards, and urns containing colored beads. However the nature of such experiments limits their applicability and often dictates that sample sizes must be small. The computer allows us to efficiently simulate many different types of random experiments. In order for a student to write a computer program which will simulate a physical experiment, he is required to understand the experiment and available simulation techniques. It is also possible for a student to simulate theoretical results and in so doing he gains a better understanding and appreciation of the theory. Laboratory exercises have been written for a year‐long course in mathematical probability and statistics. Some of the exercises illustrate the simulation of physical experiments, while others help the student understand theoretical concepts. The students are encouraged to experiment. For example, if certain hypotheses in a theorem are ...
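In the spirit of the laboratory exercises described (a made-up example, not one of the article's assignments): simulating the sum of two dice and comparing the empirical frequencies with the theoretical ones.

```python
# A made-up exercise in the spirit of the article: simulate rolling two dice
# many times and compare empirical frequencies of the sum with theory.
import numpy as np

rng = np.random.default_rng(7)
n_rolls = 10_000
sums = rng.integers(1, 7, size=n_rolls) + rng.integers(1, 7, size=n_rolls)

for s in range(2, 13):
    empirical = np.mean(sums == s)
    theoretical = (6 - abs(s - 7)) / 36   # number of ways to roll s, out of 36
    print(f"sum {s:2d}: empirical {empirical:.3f}, theoretical {theoretical:.3f}")
```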
- Published
- 1974
43. On the Power of the One-Sided Kolmogorov Test When the Sample Size is Small
- Author
-
V. A. Epanecnikov
- Subjects
Statistics and Probability ,symbols.namesake ,Sample size determination ,One sided ,Mathematical analysis ,symbols ,Statistics, Probability and Uncertainty ,Kolmogorov–Smirnov test ,Mathematics ,Power (physics) - Published
- 1974
44. Required sample sizes in clinical trials for lung cancer treatment results that the survival-time patterns of patients have lognormality
- Author
-
Nobuo Yamashita
- Subjects
Pulmonary and Respiratory Medicine ,Oncology ,medicine.medical_specialty ,business.industry ,Treatment results ,medicine.disease ,Clinical trial ,Time pattern ,Sample size determination ,Internal medicine ,medicine ,business ,Lung cancer - Published
- 1974
45. Analysis of Sudden Death Tests of Bearing Endurance
- Author
-
J. I. McCool
- Subjects
Bearing (mechanical) ,business.industry ,Mechanical Engineering ,Maximum likelihood ,General Engineering ,Surfaces and Interfaces ,Sudden death ,Confidence interval ,Surfaces, Coatings and Films ,Test (assessment) ,law.invention ,Mechanics of Materials ,law ,Sample size determination ,Statistics ,Forensic engineering ,Medicine ,business - Abstract
In sudden death testing, a group of bearings is divided into a number of equal-sized subgroups and tested only until the first failure occurs in each subgroup. General procedures based on maximum likelihood estimates are given for setting confidence limits on L10 life from sudden death test results, and the necessary tabular values are given for some sample and subgroup sizes. Comparisons are made of the total expected test time and the precision in determining L10 by (a) a sudden death test and by (b) a conventional endurance test having the same total sample size and number of failures. Presented at the 28th ASLE Annual Meeting in Chicago, Illinois, April 30–May 3, 1973
- Published
- 1974
46. A comparison of sequential sampling procedures for selecting the better of two binomial populations
- Author
-
D. H. Young and H.R. Taheri
- Subjects
Statistics and Probability ,education.field_of_study ,Applied Mathematics ,General Mathematics ,Population ,Sampling (statistics) ,Absolute difference ,Simple random sample ,Agricultural and Biological Sciences (miscellaneous) ,Sample size determination ,Statistics ,Econometrics ,Cluster sampling ,Statistics, Probability and Uncertainty ,General Agricultural and Biological Sciences ,education ,Selection (genetic algorithm) ,Importance sampling ,Mathematics - Abstract
SUMMARY A comparison is made between play-the-winner sampling and vector-at-a-time sampling for selecting the better of two binomial populations when the selection requirement is defined in terms of the ratio of the single-trial probabilities of success rather than the difference. Thus the probability of correct selection is to be at least P* when the true ratio of probabilities of success θ is at least θ*, where P* and θ* are prescribed. Termination rules based on the difference in the number of successes and on inverse sampling are considered. It is shown that play-the-winner sampling is uniformly preferable to vector-at-a-time sampling for both termination rules, and that for θ* close to one the success-lead termination rule leads to a smaller expected number of trials on the poorer treatment unless θ is even closer to one. The problem of selecting the better of two binomial populations, i.e. the one with the higher probability of success p on a single trial, has usually been stated using the framework of ranking and selection procedures as follows. If P* and Δ* are preassigned constants, with 1/2 < P* < 1 and 0 < Δ* < 1, the probability of a correct selection is to be at least P* when the true difference in the p-values is at least Δ*. Sobel & Weiss (1970) consider two sequential procedures for this selection problem in which sampling is terminated when the absolute difference in the number of successes for the two populations first reaches a predetermined integer. We will refer to this as the success-lead rule for termination. In the first procedure, play-the-winner sampling is used, in which sampling continues with the same population after each success and switches to the other population after each failure. In the second procedure, vector-at-a-time sampling is used, in which two observations are taken at each stage, one from each population. In many contexts it is desirable to have a small expected number of trials on the poorer population or treatment, but using the success-lead rule it is not true uniformly in the parameter space for the p's that play-the-winner sampling is superior to vector-at-a-time sampling in this respect. In a second paper Sobel & Weiss (1971) consider the use of an inverse sampling rule in which sampling is terminated when any one population has r successes. They show that in this case play-the-winner sampling is uniformly preferable to vector-at-a-time sampling in the sense that for the same probability requirement, the expected number of trials on the poorer treatment is always smaller. Truncated versions of the inverse sampling rule and the success-lead rule for play-the-winner sampling were proposed by Hoel (1972) and Fushimi (1973). Their stopping rules include the number of failures as well as the number of successes and mean that excessive sample sizes are avoided when the population probabilities of success are small.
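A small simulation sketch of the two allocation rules under the inverse-sampling termination rule (stop when either population reaches r successes); the parameter values are illustrative assumptions, not those studied in the paper.

```python
# Illustrative simulation: expected number of trials on the poorer treatment
# under play-the-winner vs. vector-at-a-time allocation, with the inverse
# sampling rule (stop when either arm reaches r successes). Parameter values
# are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(8)
p_better, p_poorer, r = 0.6, 0.4, 10

def play_the_winner():
    successes = [0, 0]; trials_poorer = 0; arm = 0   # arm 0 = better, 1 = poorer
    while max(successes) < r:
        p = p_better if arm == 0 else p_poorer
        trials_poorer += (arm == 1)
        if rng.random() < p:
            successes[arm] += 1        # success: stay on the same arm
        else:
            arm = 1 - arm              # failure: switch arms
    return trials_poorer

def vector_at_a_time():
    successes = [0, 0]; trials_poorer = 0
    while max(successes) < r:
        successes[0] += rng.random() < p_better
        successes[1] += rng.random() < p_poorer
        trials_poorer += 1             # one trial per stage goes to the poorer arm
    return trials_poorer

reps = 2000
print("play-the-winner :", np.mean([play_the_winner() for _ in range(reps)]))
print("vector-at-a-time:", np.mean([vector_at_a_time() for _ in range(reps)]))
```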
- Published
- 1974
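The comparison summarized above can also be explored by simulation. The sketch below is an illustration under assumed values (success probabilities 0.6 versus 0.4 and a required lead of r = 5), not the paper's analytical treatment: it implements the success-lead termination rule for both play-the-winner and vector-at-a-time sampling and estimates the expected number of trials on the poorer treatment.

```python
# Sketch: Monte Carlo comparison of play-the-winner (PW) and vector-at-a-time (VT)
# sampling under the success-lead termination rule (stop when |S_A - S_B| = r).
import numpy as np

def play_the_winner(p, q, r, rng):
    """Trials on the poorer arm until the success-count lead reaches r."""
    lead, current, poorer_trials = 0, 0, 0        # current arm: 0 has prob p, 1 has prob q
    probs = (p, q)
    while abs(lead) < r:
        poorer_trials += (probs[current] == min(p, q))
        success = rng.random() < probs[current]
        lead += (1 if current == 0 else -1) if success else 0
        if not success:                           # switch arms after a failure
            current = 1 - current
    return poorer_trials

def vector_at_a_time(p, q, r, rng):
    lead, poorer_trials = 0, 0
    while abs(lead) < r:
        lead += (rng.random() < p) - (rng.random() < q)
        poorer_trials += 1                        # one observation per arm each stage
    return poorer_trials

rng = np.random.default_rng(0)
p, q, r, n_rep = 0.6, 0.4, 5, 20000               # illustrative values only
for rule in (play_the_winner, vector_at_a_time):
    mean_poorer = np.mean([rule(p, q, r, rng) for _ in range(n_rep)])
    print(rule.__name__, round(mean_poorer, 2))
```

The same harness can be adapted to the inverse sampling rule by replacing the termination condition with a stop when either arm reaches r successes.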
47. Some moments of an estimate of Shannon's measure of information
- Author
-
K. Hutcheson and L. R. Shenton
- Subjects
Statistics and Probability ,Sample size determination ,Modeling and Simulation ,Statistics ,Table (database) ,Multinomial distribution ,Trinomial ,Measure (mathematics) ,Mathematics - Abstract
An underlying multinomial distribution is assumed, and from this the first two moments of an estimate of Shannon's measure of information are derived in closed form. A table of the first two moments for varying sample size is given for a trinomial distribution. A small Monte Carlo sketch of these moments follows this record.
- Published
- 1974
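To see concretely what these moments describe, the sketch below simulates the plug-in estimate of Shannon's measure under a trinomial model. The cell probabilities are illustrative, and Monte Carlo averages stand in for the paper's closed-form expressions; the mean is also compared with the familiar first-order bias approximation E[H_hat] ≈ H - (k - 1)/(2n).

```python
# Sketch: Monte Carlo moments of the plug-in (maximum likelihood) estimate of
# Shannon's measure H = -sum(p_i * ln p_i) under a trinomial model.
import numpy as np

def plug_in_entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]                       # 0 * ln(0) is taken as 0
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
probs = np.array([0.5, 0.3, 0.2])      # illustrative trinomial cell probabilities
true_h = -np.sum(probs * np.log(probs))

for n in (10, 25, 50, 100):            # varying sample size, as in the paper's table
    samples = rng.multinomial(n, probs, size=50_000)
    h_hat = np.apply_along_axis(plug_in_entropy, 1, samples)
    bias_approx = -(len(probs) - 1) / (2 * n)          # first-order bias term
    print(f"n={n:4d}  E[H_hat]={h_hat.mean():.4f}  true H={true_h:.4f}  "
          f"H+bias approx={true_h + bias_approx:.4f}  Var[H_hat]={h_hat.var():.5f}")
```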
48. Effect of Flow Rate Disturbance of Carrier Gas on Quantitative Analysis in Gas Chromatographic Column Containing Sample
- Author
-
Ichiro Takeda
- Subjects
Viscosity ,Chromatography ,Chemistry ,Elution ,Sample size determination ,Phase (matter) ,Detector ,Analytical chemistry ,General Chemistry ,Gas chromatography ,Quantitative analysis (chemistry) ,Volumetric flow rate - Abstract
For precise quantitative analysis by gas chromatography with a thermal conductivity (TC) detector, a constant carrier gas flow rate must be maintained. Two effects are known to disturb the flow rate and to produce errors in quantitative analysis: first, the difference in viscosity between the sample zone and the pure carrier gas; second, the increase in carrier gas flow rate at the column outlet that accompanies elution of a sample peak, caused by sample vapor leaving the stationary phase. The second effect was treated theoretically by assuming the peak shape to be triangular. To reduce these errors, two methods were tested and compared with the usual method (A): method (B), in which a column (3 mmφ × 75 cm) with gradually decreasing stationary phase loading was connected between the main column outlet and the detector, and method (C), in which a long empty delay tube (3 mmφ × 75 cm) was connected between the main column and the detector. When the sample size was increased, the percentage of the main peak integral decreased as if it were affected by non-linearity of the detector, as shown in Figs. 1 to 4. However, the percentage was less affected by sample size in methods (B) and (C) than in method (A), except in the case of n-octane, which had a very short retention time. Theoretical calculation showed that the decrease in the percentage with increasing sample size in the usual method (A) was mainly due to the second effect, and that in some cases the error due to this effect may reach a few percent in routine analysis, as shown in Table 1.
- Published
- 1974
49. Principles and pitfalls in establishing normal electrocardiographic limits
- Author
-
Ernst Simonson
- Subjects
Percentile ,medicine.medical_specialty ,Heart Diseases ,Sample (material) ,Statistics as Topic ,Adult population ,Body weight ,Sampling Studies ,Standard deviation ,Electrocardiography ,Internal medicine ,Statistics ,Humans ,Mass Screening ,Medicine ,Diagnostic Errors ,Extreme value theory ,Mass screening ,Analysis of Variance ,business.industry ,United States ,Sample size determination ,Population Surveillance ,Cardiology ,Cardiology and Cardiovascular Medicine ,business - Abstract
The three main prerequisites for determination of valid normal limits are an adequate sample size, a sample composition representative of the average healthy population, and adequate statistical evaluation. In a substantial number of publications, even recent ones, one or another of these principles (and occasionally all three) has been disregarded. The most common error is an inadequate sample size. The minimal sample size of a male adult population for determination of normal limits should exceed 500, and should approach 1,000 for a mixed sample of men and women. Constitutional variables affecting the electrocardiogram (relative body weight, age, sex and race) must be considered. In view of the skewed distribution of most electrocardiographic items, 95 or 98 percentile limits should be used for determination of valid normal limits; use of standard deviations (usually ±2 SD) may result in erroneous limits. The extreme values of the distribution of a sample of healthy persons are not acceptable because they include values from a substantial number of abnormal persons, thus resulting in a large number of false normal diagnoses. A brief numerical illustration of the percentile-versus-standard-deviation point follows this record.
- Published
- 1974
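The percentile-versus-standard-deviation point is easy to demonstrate on any skewed variable. The sketch below uses simulated lognormal values as a stand-in for a skewed electrocardiographic item (purely illustrative data, with the 2nd and 98th percentiles standing in for the percentile limits recommended above) and contrasts them with mean ± 2 SD limits.

```python
# Sketch: on a skewed distribution, mean +/- 2 SD limits differ noticeably from
# percentile-based limits of the kind recommended for normal-value studies.
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.6, size=1000)   # skewed "healthy sample" (illustrative)

sd_lo, sd_hi = x.mean() - 2 * x.std(), x.mean() + 2 * x.std()
pct_lo, pct_hi = np.percentile(x, [2, 98])

print(f"mean +/- 2 SD : {sd_lo:6.3f} to {sd_hi:6.3f}")
print(f"2nd-98th pct  : {pct_lo:6.3f} to {pct_hi:6.3f}")
# With right skew the lower SD limit can even go negative for a strictly
# positive quantity, while the percentile limits stay inside the data range.
print("fraction below lower SD limit:", np.mean(x < sd_lo))
print("fraction above upper SD limit:", np.mean(x > sd_hi))
```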
50. Analysis of selection in populations observed over a sequence of consecutive generations
- Author
-
Alan R. Templeton
- Subjects
Genetics ,education.field_of_study ,Population size ,Population ,Statistical model ,General Medicine ,Biology ,Marginal likelihood ,Sample size determination ,Statistics ,Marginal distribution ,Spurious relationship ,education ,Agronomy and Crop Science ,Biotechnology ,Statistical hypothesis testing - Abstract
A statistical model is presented for dealing with genotypic frequency data obtained from a single population observed over a run of consecutive generations. This model takes into account possible correlations between generations by conditioning the marginal probability distribution of any one generation on the previously observed generation. Maximum likelihood estimates of the fitness parameters are derived and a hypothesis-testing framework is developed. The model is very general, and in this paper it is applied to random-mating, selfing, parthenogenetic, and mixed random-mating and selfing populations with respect to a single-locus, g-allele model with constant genotypic fitness differences and with all selection occurring either before or after sampling. The assumptions behind this model are contrasted with those of alternative techniques such as minimum chi-square or "unconditional" maximum likelihood estimation, in which the marginal likelihoods for any one generation are conditioned only on the initial conditions and not on the previous generation. The conditional model is most appropriate when the sample size per generation is large, either in an absolute sense or in relation to the total population size. Minimum chi-square and the unconditional likelihood are most appropriate when the population size is effectively infinite and the samples are small. Both models are appropriate when the samples are large and the population size is effectively infinite. Under these last conditions the conditional model may be preferred because it is more robust to small deviations from the underlying assumptions and has a simpler form. Furthermore, if any genetic drift occurs in the experiment, the minimum chi-square and unconditional likelihood approaches can create spurious evidence for selection, while the conditional approach will not. Worked examples are presented. A minimal two-allele sketch of the conditional likelihood follows this record.
- Published
- 1974
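For the random-mating case the conditional structure is simple enough to sketch. The code below is a minimal illustration under assumptions added here for concreteness (two alleles rather than the general g-allele model, constant genotypic fitnesses with the heterozygote fitness fixed at 1, selection acting before sampling, and invented toy counts): each generation's expected genotype frequencies are conditioned on the allele frequency observed in the previous generation, and the resulting multinomial log-likelihood is maximized numerically.

```python
# Sketch: conditional ML estimation of genotypic fitnesses from genotype counts
# observed over consecutive generations (two alleles, random mating, constant
# fitnesses, selection before sampling; w_Aa fixed at 1 for identifiability).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multinomial

counts = np.array([   # rows = generations, columns = (AA, Aa, aa) counts (toy data)
    [30, 50, 20],
    [36, 48, 16],
    [40, 46, 14],
    [45, 44, 11],
])

def neg_log_lik(log_w, counts):
    w_AA, w_aa = np.exp(log_w)          # log parameterization keeps fitnesses positive
    ll = 0.0
    for prev, curr in zip(counts[:-1], counts[1:]):
        p = (2 * prev[0] + prev[1]) / (2 * prev.sum())   # allele freq in observed parents
        hw = np.array([p**2, 2 * p * (1 - p), (1 - p)**2])
        post = hw * np.array([w_AA, 1.0, w_aa])
        post /= post.sum()              # genotype freqs after selection, before sampling
        ll += multinomial.logpmf(curr, n=curr.sum(), p=post)
    return -ll

fit = minimize(neg_log_lik, x0=np.zeros(2), args=(counts,), method="Nelder-Mead")
w_AA_hat, w_aa_hat = np.exp(fit.x)
print(f"w_AA = {w_AA_hat:.3f}, w_Aa = 1 (fixed), w_aa = {w_aa_hat:.3f}")
```

Because each term conditions on the previously observed counts, generation-to-generation drift is absorbed into the conditioning rather than read as selection, which is the robustness property noted above.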