38 results for "Double sampling"
Search Results
2. An integrated‐likelihood‐ratio confidence interval for a proportion based on underreported and infallible data
- Author
-
Phil D. Young, Dean M. Young, Briceön Wiley, and Chris Elrod
- Subjects
Statistics and Probability, Double sampling, Statistics, Coverage probability, Statistics, Probability and Uncertainty, Marginal likelihood, Confidence interval, Mathematics, Numerical integration - Published
- 2021
- Full Text
- View/download PDF
3. Estimation of Poisson mean with under‐reported counts: a double sampling approach
- Author
-
Tathagata Banerjee, Debjit Sengupta, and Surupa Roy
- Subjects
Statistics and Probability, Estimation, Maximum likelihood, Estimator, Poisson distribution, Confidence interval, Double sampling, Mortality data, Statistics, Statistics, Probability and Uncertainty, Mathematics, Count data - Abstract
Count data arising in various fields of application are often under-reported. Ignoring undercount naturally leads to biased estimators and inaccurate confidence intervals. In this paper we develop likelihood-based methodologies for estimating the mean in the presence of undercount, using validation data. The asymptotic distributions of the competing estimators of the mean are derived. The impact of ignoring undercount on the coverage and length of confidence intervals is investigated using extensive numerical studies. Finally, an analysis of heat-mortality data is presented.
- Published
- 2020
- Full Text
- View/download PDF
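The under-reporting setup in the abstract above can be illustrated with a small simulation. The paper develops likelihood-based estimators; the sketch below uses a simpler moment-type correction under an assumed binomial-thinning model (reported counts Y | X ~ Binomial(X, p)), with all parameter values and sample sizes hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
lam_true, p_true = 10.0, 0.7          # true Poisson mean and reporting probability

# Main sample: only the under-reported counts y are observed.
n_main = 2000
x_main = rng.poisson(lam_true, n_main)
y_main = rng.binomial(x_main, p_true)          # each event reported with prob p

# Validation (double-sampled) subsample: both true x and reported y observed.
n_val = 200
x_val = rng.poisson(lam_true, n_val)
y_val = rng.binomial(x_val, p_true)

p_hat = y_val.sum() / x_val.sum()              # reporting probability from validation data
lam_naive = y_main.mean()                      # ignores undercount: biased toward lam*p
lam_corrected = y_main.mean() / p_hat          # moment-type corrected estimator

print(f"naive: {lam_naive:.2f}  corrected: {lam_corrected:.2f}  (truth {lam_true})")
```

The naive mean estimates λp rather than λ; dividing by the validation-based estimate of p removes the bias, at the cost of extra variance from estimating p.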
4. A 15‐MHz bandwidth double sampling MASH2 5b ‐1 5b sigma‐delta modulator with DEM for multibit DACs
- Author
-
Qidong Zhang, Yani Li, Chunlong Fei, Di Li, Xiaopeng Wu, and Yintang Yang
- Subjects
Physics, Double sampling, Applied Mathematics, Bandwidth (signal processing), Electronic engineering, Electrical and Electronic Engineering, Delta-sigma modulation, Computer Science Applications, Electronic, Optical and Magnetic Materials - Published
- 2019
- Full Text
- View/download PDF
5. Refined double sampling scheme with measures and application
- Author
-
Abubaker Aidroos Khired, Muhammad Aslam, and Saeed A. Dobbah
- Subjects
Statistics and Probability, Scheme (programming language), Average run length, Double sampling, Control chart, Statistics, Probability and Uncertainty, Algorithm, Mathematics - Published
- 2021
- Full Text
- View/download PDF
6. Choosing profile double-sampling designs for survival estimation with application to President's Emergency Plan for AIDS Relief evaluation
- Author
-
Constantin T. Yiannoutsos, Constantine Frangakis, and Ming Wen An
- Subjects
Statistics and Probability, Research design, Epidemiology, Emergency plan, Acquired immunodeficiency syndrome (AIDS), Double sampling, Survival function, Covariate, Statistics, Medicine, Function (engineering), Dropout (neural networks) - Abstract
Most studies that follow subjects over time are challenged by having some subjects who drop out. Double sampling is a design that selects, and devotes resources to intensively pursue and find, a subset of these dropouts, and then uses the data obtained from them to adjust naive estimates, which are potentially biased by the dropout. Existing methods to estimate survival from double sampling assume a random sample. In limited-resource settings, however, generating accurate estimates using a minimum of resources is important. We propose using double-sampling designs that oversample certain profiles of dropouts as more efficient alternatives to random designs. First, we develop a framework to estimate the survival function under these profile double-sampling designs. We then derive the precision of these designs as a function of the rule for selecting different profiles, in order to identify more efficient designs. We illustrate using data from a United States President's Emergency Plan for AIDS Relief (PEPFAR)-funded HIV care and treatment program in western Kenya. Our results show why and how more efficient designs should oversample patients with shorter dropout times. Further, our work suggests generalizable practice for more efficient double-sampling designs, which can help maximize efficiency in resource-limited settings.
- Published
- 2014
- Full Text
- View/download PDF
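The profile double-sampling idea above can be illustrated with a Horvitz-Thompson-style correction: pursued dropouts are up-weighted by the inverse of their profile-specific pursuit probability. This is a minimal sketch on simulated data with hypothetical pursuit probabilities, not the estimator developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000
t_eval = 1.0

death = rng.exponential(scale=2.0, size=n)       # true event times
early = death <= t_eval                          # event by time t_eval

# Informative dropout: early deaths are more likely to drop out, so a
# naive analysis restricted to non-dropouts under-estimates mortality.
dropout = rng.random(n) < np.where(early, 0.6, 0.2)

# An observed "profile" correlated with early death (hypothetical), and
# profile-dependent pursuit probabilities that oversample one profile.
profile = dropout & (rng.random(n) < np.where(early, 0.7, 0.3))
q = np.where(profile, 0.8, 0.3)                  # P(pursued | profile), known by design
pursued = dropout & (rng.random(n) < q)

# Horvitz-Thompson weights: non-dropouts weight 1, pursued dropouts 1/q,
# unpursued dropouts contribute nothing.
w = np.where(~dropout, 1.0, np.where(pursued, 1.0 / q, 0.0))
ht_est = np.sum(w * early) / n
naive = early[~dropout].mean()
true_p = 1 - np.exp(-t_eval / 2.0)
print(f"true {true_p:.3f}  double-sampling {ht_est:.3f}  naive {naive:.3f}")
```

Because the pursuit probabilities are set by the design, the weighted estimator is unbiased regardless of how informative the dropout is; the choice of which profiles to oversample only affects its variance, which is the trade-off the paper optimizes.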
7. Optimal T2 Control Chart with a Double Sampling Scheme - An Alternative to the MEWMA Chart
- Author
-
Alireza Faraz, Cédric Heuchenne, and Erwin M. Saniga
- Subjects
Scheme (programming language), Engineering, Statistical design, Sampling (statistics), Management Science and Operations Research, Variable ratio, Double sampling, Chart, Statistics, Control chart, Safety, Risk, Reliability and Quality, Sampling interval - Abstract
Recent studies have shown that a double sampling (DS) scheme yields improvements in detection times of process shifts over the variable ratio sampling (VRS) methods that have been studied extensively in the literature. Additionally, a DS scheme is more practical than some of the VRS methods, since the sampling interval is fixed. In this paper, we investigate the effect of double sampling on cost, a criterion as important as detection rate. We study the economic statistical design of the DS T2 chart (ESD DS T2), so that designs are found that are economically optimal yet meet desired statistical properties, such as low probabilities of false searches and high probabilities of rapid detection of process shifts. Through an illustrative example, we show that relatively large benefits can be achieved with our ESD DS T2 approach in comparison with the classical T2 chart and the statistical DS T2 charts. Furthermore, the economic performance of the ESD DS T2 charts compares favorably with the MEWMA and other VRS T2 control charts in the literature. Copyright © 2011 John Wiley & Sons, Ltd.
- Published
- 2011
- Full Text
- View/download PDF
8. General Principles of Estimation
- Author
-
Michael J. Conroy and John P. Carroll
- Subjects
Estimation, Double sampling, Computer science, Data mining - Published
- 2009
- Full Text
- View/download PDF
9. The Need for Double-Sampling Designs in Survival Studies: An Application to Monitor PEPFAR
- Author
-
Constantin T. Yiannoutsos, Ming Wen An, Beverly S. Musick, and Constantine Frangakis
- Subjects
Statistics and Probability, Research design, Human immunodeficiency virus (HIV), Article, General Biochemistry, Genetics and Molecular Biology, Bias, Acquired immunodeficiency syndrome (AIDS), Humans, Dropout (neural networks), Acquired Immunodeficiency Syndrome, Models, Statistical, Actuarial science, Data collection, General Immunology and Microbiology, Applied Mathematics, Emergency plan, General Medicine, Objective Evidence, Survival Analysis, Double sampling, Research Design, General Agricultural and Biological Sciences - Abstract
In 2007, there were 33.2 million people around the world living with HIV/AIDS (UNAIDS/WHO, 2007). In May 2003, the U.S. President announced a global program, known as the President's Emergency Plan for AIDS Relief (PEPFAR), to address this epidemic. We seek to estimate patient mortality in PEPFAR in an effort to monitor and evaluate this program. This effort, however, is hampered by loss to follow-up, which occurs at very high rates. As a consequence, standard survival analyses of the observed non-dropout data are generally biased and provide no objective evidence with which to correct the potential bias. In this article, we apply double-sampling designs and methodology to PEPFAR data, and we obtain substantially different and more plausible estimates compared with standard methods (a 1-year mortality estimate of 9.6% compared with 1.7%). The results indicate that a double-sampling design is critical in providing objective evidence of possible nonignorable dropout and, thus, in obtaining accurate data in PEPFAR. Moreover, we show the need for appropriate analysis methods coupled with double-sampling designs.
- Published
- 2008
- Full Text
- View/download PDF
10. Double sampling Hotelling's T2 charts
- Author
-
Francisco Aparisi and Charles W. Champ
- Subjects
Engineering, Double sampling, Chart, Average run length, Statistics, Genetic algorithm, Control chart, Sample (statistics), Management Science and Operations Research, Safety, Risk, Reliability and Quality, Statistical process control - Abstract
Two double sampling T2 charts are discussed. They differ only in how the second sample is used to indicate to the practitioner the state of the process. An optimal method using a genetic algorithm is given for designing these charts based on the average run length (ARL). An analytical method is used to determine the run-length performance of the charts. Comparisons are made with various other control-charting procedures, and some recommendations are given. Copyright © 2007 John Wiley & Sons, Ltd.
- Published
- 2008
- Full Text
- View/download PDF
11. Relative Frequency Estimation in Multiple Outcome Measurement with Misclassifications
- Author
-
E. T. Jolayemi
- Subjects
Statistics and Probability, Multiple outcome, Estimation, Variable (computer science), Double sampling, Statistics, Sampling (statistics), Classification scheme, General Medicine, Statistics, Probability and Uncertainty, Frequency, Square (algebra), Mathematics - Abstract
A fallible classification scheme is known to produce inconsistent relative-frequency estimates for a polychotomous variable. These estimates, however, can be improved by using a sub-sample that yields a square table. In this research we describe the method of improvement.
- Published
- 2007
- Full Text
- View/download PDF
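The correction described in the abstract can be made concrete: the double-sampled subsample, cross-classified by the fallible and the error-free scheme, yields the conditional distribution of the true class given the fallible class, which is then averaged over the observed fallible margins. All counts below are hypothetical.

```python
import numpy as np

# Large sample classified by the fallible scheme only (n = 1000).
fallible_counts = np.array([520, 310, 170])
p_obs = fallible_counts / fallible_counts.sum()

# Square table from the subsample: rows = true class, cols = fallible class.
square = np.array([[180,  15,   5],
                   [ 20, 120,  10],
                   [ 10,  15,  75]], dtype=float)

# P(true = i | fallible = j), estimated column-wise from the square table.
p_true_given_fall = square / square.sum(axis=0, keepdims=True)

# Corrected relative frequencies: average the conditional distribution
# of the true class over the observed fallible margins.
p_corrected = p_true_given_fall @ p_obs
print(np.round(p_corrected, 4), p_corrected.sum())
```

Averaging the subsample's conditional distribution over the cheap-sample margins avoids inverting the misclassification matrix and automatically returns a proper probability vector.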
12. A BAYESIAN HIERARCHICAL MODEL FOR POISSON RATE AND REPORTING-PROBABILITY INFERENCE USING DOUBLE SAMPLING
- Author
-
James D. Stamey, Doyle H. Boese, and Dean M. Young
- Subjects
Statistics and Probability, Observational error, Inference, Poisson distribution, Bayesian statistics, Bayes' theorem, Double sampling, Statistics, Bayesian hierarchical modeling, Statistics, Probability and Uncertainty, Mathematics, Count data - Abstract
Summary: We analyse a combination of errant count data, subject to under-reported counts, and inerrant count data to estimate multiple Poisson rates and reporting probabilities of cervical cancer for four European countries. Our analysis uses a Bayesian hierarchical model. Using a simulation study, we demonstrate the efficacy of our new simultaneous inference method and compare its utility with an empirical Bayes approach developed by Fader and Hardie (J. Appl. Statist., 2000).
- Published
- 2006
- Full Text
- View/download PDF
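A single-rate version of the model above (one Poisson rate and one reporting probability, rather than the paper's multi-country hierarchy) admits a simple Gibbs sampler with data augmentation; the conjugate priors, sample sizes, and true values below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
lam_true, p_true = 8.0, 0.6

# Errant (under-reported) counts: only the reported z is observed.
n_err = 500
x_err_true = rng.poisson(lam_true, n_err)
z_err = rng.binomial(x_err_true, p_true)

# Inerrant (double-sampled) counts: both true x and reported z observed.
n_ok = 60
x_ok = rng.poisson(lam_true, n_ok)
z_ok = rng.binomial(x_ok, p_true)

# Gibbs sampler with data augmentation under conjugate priors
# lam ~ Gamma(1, rate 0.01) and p ~ Beta(1, 1) (hypothetical choices).
a, b, c, d = 1.0, 0.01, 1.0, 1.0
lam, p = z_err.mean(), 0.5
draws = []
for it in range(3000):
    # Impute unreported events in the errant sample: x - z | z ~ Poisson(lam*(1-p)).
    x_err = z_err + rng.poisson(lam * (1 - p), n_err)
    sx = x_err.sum() + x_ok.sum()
    sz = z_err.sum() + z_ok.sum()
    n = n_err + n_ok
    lam = rng.gamma(a + sx, 1.0 / (b + n))       # Gamma(shape, scale)
    p = rng.beta(c + sz, d + (sx - sz))
    if it >= 1000:                               # discard burn-in
        draws.append((lam, p))

lam_post, p_post = np.mean(draws, axis=0)
print(f"posterior means: lambda ~ {lam_post:.2f}, p ~ {p_post:.2f}")
```

The inerrant data are what identify the reporting probability; without them, lambda and p enter the errant likelihood only through their product.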
13. Attributes Sampling Schemes in International Standards
- Author
-
Saminathan Balamurali, Chi‐Hyuck Jun, and Rainer Göb
- Subjects
Engineering, Operations research, Process (engineering), Sampling (statistics), Double sampling, Multiple sampling, Quality (business), Operations management, Lot quality assurance sampling, Sequential sampling, Acceptable quality limit - Abstract
Sampling tables for the inspection of attribute quality characteristics were developed during World War II by the United States Department of Defense. In 1950, these tables led to the first military standard, MIL-STD-105A, which evolved into the latest version, MIL-STD-105E, issued in 1989. Further national and international standards emerged as derivatives of MIL-STD. At the international level, the International Organization for Standardization (ISO) introduced the attributes sampling standard ISO 2859, which has been adopted as a national standard in many countries. The article analyzes the purpose, structure, and operation of MIL-STD-105E and of ISO 2859. Keywords: attributes sampling; sampling standard; MIL-STD-105E; ISO 2859; process proportion nonconforming; acceptable quality level; limiting quality; single sampling; double sampling; multiple sampling; switching rules; sequential sampling
- Published
- 2014
- Full Text
- View/download PDF
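The operating characteristic of a double sampling attributes plan of the MIL-STD/ISO type can be computed exactly from the binomial distribution. The plan parameters below are illustrative, not taken from either standard.

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

def oc_double(p, n1, c1, r1, n2, c2):
    """Probability of acceptance for a double sampling attributes plan:
    accept if d1 <= c1; reject if d1 >= r1; otherwise draw the second
    sample of n2 and accept if d1 + d2 <= c2."""
    pa = binom_cdf(c1, n1, p)
    for d1 in range(c1 + 1, r1):
        pa += binom_pmf(d1, n1, p) * binom_cdf(c2 - d1, n2, p)
    return pa

# Illustrative plan (not from any standard): n1=50, c1=1, r1=4, n2=50, c2=4.
for p in (0.01, 0.05, 0.10):
    print(f"p = {p:.2f}: P(accept) = {oc_double(p, 50, 1, 4, 50, 4):.3f}")
```

The second sample is drawn only when the first count falls strictly between the acceptance and rejection numbers, which is the mechanism that lets double plans match a single plan's OC curve with a smaller average sample size.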
14. Double Sampling in Industrial Sampling Inspection
- Author
-
I. D. Hill
- Subjects
Sampling inspection ,Double sampling ,Computer science ,Remote sensing - Published
- 2014
- Full Text
- View/download PDF
15. Median Estimation Using Double Sampling
- Author
-
Anwar H. Joarder, Sarjinder Singh, and Derrick S. Tracy
- Subjects
Statistics and Probability, Minimum-variance unbiased estimator, Double sampling, Sample size determination, Statistics, Sampling design, Estimator, Statistics, Probability and Uncertainty, Regression, Importance sampling, Mathematics, Stratified sampling - Abstract
This paper proposes a general class of estimators for estimating the median in double sampling. The position estimator, stratification estimator and regression type estimator attain the minimum variance of the general class of estimators. The optimum values of the first-phase and second-phase sample sizes are also obtained for the fixed cost and the fixed variance cases. An empirical study examines the performance of the double sampling strategies for median estimation. Finally, an extension of the methods of Chen & Qin (1993) and Kuk & Mak (1994) is considered for the double sampling strategy.
- Published
- 2001
- Full Text
- View/download PDF
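The fixed-cost optimization mentioned in the abstract has a well-known closed form in the classical double-sampling regression setting (a simpler setting than median estimation, used here only to illustrate the cost/variance trade-off between the two phases); the costs, budget, and correlation below are hypothetical.

```python
from math import sqrt

def optimal_allocation(budget, c1, c2, rho2):
    """Optimal first-phase (cheap, n1) and second-phase (expensive, n2)
    sizes for the double-sampling regression estimator with variance
    V = S^2 * [(1 - rho2)/n2 + rho2/n1] and cost c1*n1 + c2*n2 = budget.
    Classical result: n2/n1 = sqrt(c1*(1 - rho2) / (c2*rho2))."""
    ratio = sqrt(c1 * (1 - rho2) / (c2 * rho2))     # n2 / n1
    n1 = budget / (c1 + c2 * ratio)
    return n1, ratio * n1

# Hypothetical costs: cheap auxiliary measurement c1=1, expensive one c2=25.
n1, n2 = optimal_allocation(budget=1000, c1=1.0, c2=25.0, rho2=0.9)
print(f"first phase ~ {n1:.0f}, second phase ~ {n2:.0f}")
```

With a strongly predictive auxiliary variable (large rho-squared), the optimum spends most of the budget on the large cheap first phase and measures the expensive variable on only a small second-phase subsample.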
16. Goodness of Fit Tests with Misclassified Data Based on ϕ-Divergences
- Author
-
Leandro Pardo and K. Zografos
- Subjects
Statistics and Probability, Goodness of fit, Double sampling, Statistics, Econometrics, General Medicine, Statistics, Probability and Uncertainty, Mathematics - Published
- 2000
- Full Text
- View/download PDF
17. Maximum likelihood estimation with missing outcomes: From simplicity to complexity.
- Author
-
Baker SG
- Subjects
- Computer Simulation, Data Interpretation, Statistical, Humans, Models, Statistical, Propensity Score, Randomized Controlled Trials as Topic, Survival Analysis, Bias, Likelihood Functions
- Abstract
Many clinical or prevention studies involve missing or censored outcomes. Maximum likelihood (ML) methods provide a conceptually straightforward approach to estimation when the outcome is partially missing. Methods of implementing ML methods range from the simple to the complex, depending on the type of data and the missing-data mechanism. Simple ML methods for ignorable missing-data mechanisms (when data are missing at random) include complete-case analysis, complete-case analysis with covariate adjustment, survival analysis with covariate adjustment, and analysis via propensity-to-be-missing scores. More complex ML methods for ignorable missing-data mechanisms include the analysis of longitudinal dropouts via a marginal model for continuous data or a conditional model for categorical data. A moderately complex ML method for categorical data with a saturated model and either ignorable or nonignorable missing-data mechanisms is a perfect fit analysis, an algebraic method involving closed-form estimates and variances. A complex and flexible ML method with categorical data and either ignorable or nonignorable missing-data mechanisms is the method of composite linear models, a matrix method requiring specialized software. Except for the method of composite linear models, which can involve challenging matrix specifications, the implementation of these ML methods ranges in difficulty from easy to moderate., (Published 2019. This article is a U.S. Government work and is in the public domain in the USA.)
- Published
- 2019
- Full Text
- View/download PDF
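The "analysis via propensity-to-be-missing scores" listed in the abstract can be sketched for a marginal mean under a missing-at-random mechanism. The data and the propensity model below are simulated, and the propensities are taken as known rather than estimated.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50000
x = rng.binomial(1, 0.5, n)                    # binary covariate
y = 2.0 + 1.0 * x + rng.normal(0, 1, n)        # outcome, E[y] = 2.5

# MAR: missingness depends only on the observed covariate x.
p_obs = np.where(x == 1, 0.9, 0.4)
observed = rng.random(n) < p_obs

naive = y[observed].mean()                     # unadjusted complete-case mean
w = 1.0 / p_obs[observed]                      # inverse propensity-to-be-observed weights
ipw = np.average(y[observed], weights=w)
print(f"truth 2.5  complete-case {naive:.3f}  IPW {ipw:.3f}")
```

The unadjusted complete-case mean is biased here because the complete cases over-represent x = 1; weighting by the inverse observation propensity restores the covariate balance. Complete-case analysis with covariate adjustment, also listed in the abstract, is the alternative route to the same end.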
18. The partial testing design: a less costly way to test equivalence for sensitivity and specificity
- Author
-
Larry Kessler, Robert J. Connor, and Stuart G. Baker
- Subjects
Statistics and Probability, Epidemiology, Computer science, Paired design, Breast cancer, Double sampling, Sample size determination, Statistics, Mammography, Equivalence (measure theory), Statistical hypothesis testing, Digital radiography - Abstract
We propose a new, less costly, design to test the equivalence of digital versus analogue mammography in terms of sensitivity and specificity. Because breast cancer is a rare event among asymptomatic women, the sample size for testing equivalence of sensitivity is larger than that for testing equivalence of specificity. Hence calculations of sample size are based on sensitivity. With the proposed design it is possible to achieve the same power as a completely paired design by increasing the number of less costly analogue mammograms and not giving the more expensive digital mammograms to some randomly selected subjects who are negative on the analogue mammogram. The key idea is that subjects who are negative on the analogue mammogram are unlikely to have cancer and hence contribute less information for estimating sensitivity than subjects who are positive on the analogue mammogram. To ascertain disease state among subjects not biopsied, we propose another analogue mammogram at a later time determined by a natural history model. The design differs from a double sampling design because it compares two imperfect tests instead of combining information from a perfect and imperfect test.
- Published
- 1998
- Full Text
- View/download PDF
19. Multistage Design
- Author
-
Mary C. Christman
- Subjects
Double sampling, Ranked set sampling, Statistics, Encyclopedia, Two stage sampling, Cluster sampling, Mathematics - Published
- 2012
- Full Text
- View/download PDF
20. Analysis of daily precipitation data by positive matrix factorization
- Author
-
Pentti Paatero and Sirkka Juntto
- Subjects
Statistics and Probability, Meteorology and atmospheric sciences, Ecological Modeling, Function (mathematics), Environmental sciences, Natural sciences, Chloride, Standard deviation, Factorization, Double sampling, Statistics, Seawater, Nonnegative matrix, Precipitation, Earth and related environmental sciences, Mathematics - Abstract
A new factor analysis method called positive matrix factorization (PMF) has been applied to daily precipitation data from four Finnish EMEP stations. The aim of the analysis was to investigate the structure of the data matrices in order to find the apparent source profiles from which the precipitation samples are constituted. A brief description of PMF is given. PMF utilizes the known uncertainty of the data values; the standard deviations were derived from the results of double sampling at one station during one year. A goodness-of-fit function Q was calculated for factor solutions with 1–8 factors, and the shape of the residuals was useful in deciding the number of factors. The strongest factor found was that of sea salt; the most dominant ions in this factor were sodium, chloride and magnesium. At the coastal stations the ratio Cl/Na of the mean concentrations in the factor was near the ratio found in sea water, but at the inland stations the ratio was smaller. For most ions, more than 90 per cent of the weighted variation was explained. The worst explained was potassium (at worst c. 60 per cent), which is possibly due to contamination problems in the laboratory. In most factors of the different factorizations, the anions and cations were fairly well balanced.
- Published
- 1994
- Full Text
- View/download PDF
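PMF minimizes the uncertainty-weighted squared residual Q = sum of ((x_ij - (GH)_ij)/sigma_ij)^2 subject to nonnegativity of the factors. The sketch below substitutes weighted multiplicative NMF updates for PMF's own least-squares algorithm; it optimizes the same objective on synthetic data with a constant assumed uncertainty sigma.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data: 2 nonnegative source profiles mixed into 100 "samples".
G_true = rng.random((100, 2))
H_true = rng.random((2, 8))
X = np.clip(G_true @ H_true + rng.normal(0, 0.01, (100, 8)), 0, None)
W = np.full_like(X, 1.0 / 0.01**2)        # weights = 1 / sigma^2 (known uncertainty)

def weighted_nmf(X, W, k, iters=500, eps=1e-9):
    """Weighted Euclidean NMF via multiplicative updates; the updates
    preserve nonnegativity of G and H by construction."""
    G = rng.random((X.shape[0], k)) + 0.1
    H = rng.random((k, X.shape[1])) + 0.1
    for _ in range(iters):
        GH = G @ H
        G *= ((W * X) @ H.T) / (((W * GH) @ H.T) + eps)
        GH = G @ H
        H *= (G.T @ (W * X)) / ((G.T @ (W * GH)) + eps)
    return G, H

G, H = weighted_nmf(X, W, k=2)
q = np.sum(W * (X - G @ H) ** 2)          # PMF's goodness-of-fit Q
print(f"Q = {q:.1f} over {X.size} cells")  # Q per cell near 1 suggests a fit
                                           # consistent with the stated sigma
```

As in the abstract, Q computed for several candidate ranks k is one practical way to decide the number of factors.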
21. Estimation of Prevalence and Relative Risk in Repetitive Surveys
- Author
-
Padam Singh and I. M. S. Lamba
- Subjects
Statistics and Probability, Estimation, Double sampling, Relative risk, Statistics, Estimator, Sampling (statistics), General Medicine, Statistics, Probability and Uncertainty, Successive sampling, Mathematics - Abstract
The theory of double sampling, as proposed by Neyman (1938) and subsequently used for successive sampling by Jessen (1942), Yates (1960), Patterson (1950), Eckler (1955), Kulldorff (1963) and Tikkiwal (1960, 1967), is explored to develop a general estimator that can be used to estimate parameters such as a mean, a ratio or a double ratio. Only the simple case of sampling on two occasions is considered, but the logic extends easily to more than two occasions. The results show that the generalised estimator will be very useful for applied statisticians.
- Published
- 1993
- Full Text
- View/download PDF
22. Rejoinder to Discussions on Addressing an Idiosyncrasy in Estimating Survival Curves Using Double Sampling in the Presence of Self-Selected Right Censoring
- Author
-
Donald B. Rubin and Constantine Frangakis
- Subjects
Statistics and Probability, Idiosyncrasy, General Immunology and Microbiology, Computer science, Applied Mathematics, Public health, General Medicine, Editorial board, Missing data, Censoring (statistics), General Biochemistry, Genetics and Molecular Biology, Double sampling, Econometrics, Suspect, General Agricultural and Biological Sciences, Survival analysis - Abstract
We thank the editorial board for the opportunity to have discussion on the issue of study design, which is often more important than analysis for obtaining reliable information, especially in problems with missing data. Double sampling designs to address dropout require allocating resources for recovering data for a subgroup of dropouts, but there are often positive trade-offs when doing so. Although such ideas have long been used for surveys with onetime enrollment, we have little evidence that they are being used systematically in studies in public health, where enrollment is often longitudinal. The conclusion is that, in these longitudinal settings, either double sampling is not often used or, as we suspect based on communications, it is employed implicitly, and the data are being analyzed with unknown methods. Our goals therefore were to
- Published
- 2001
- Full Text
- View/download PDF
23. Discussion of Double Sampling for Survival Analysis
- Author
-
Stuart G. Baker
- Subjects
Statistics and Probability, Randomization, General Immunology and Microbiology, Random assignment, Applied Mathematics, Follow up studies, General Medicine, General Biochemistry, Genetics and Molecular Biology, Double sampling, Drop out, Censoring (clinical trials), Statistics, General Agricultural and Biological Sciences, Survival analysis, Dropout (neural networks), Mathematics - Abstract
Design. In various types of biomedical studies, subjects drop out for reasons related to failure. For example, less healthy patients may seek treatment elsewhere, or healthier subjects may be discharged from hospital prior to the end of the study. Typically, inference requires strong assumptions about the dropout mechanism (Baker, Wax, and Patterson, 1993) (BWP). The primary motivation of double sampling is to provide information to relax the distributional assumptions. There are two basic designs.

Dropout double sampling. At the time of dropout, a subject is randomly assigned to follow-up or no follow-up. This is the design in Frangakis and Rubin (FR), where dropout was loss to follow-up, failure time was time from surgery to failure of prosthesis, and administrative censoring varied, depending on time of entry.

Initial double sampling. At the start of the study, such as the time of surgery or of randomization to treatment assignment, subjects are randomly assigned to either a partial follow-up (PF) or full follow-up (FF) group. If a subject in the FF group drops out of the study, he is followed until failure or administrative censoring. If a subject in the PF group drops out, he is not followed after dropout. This is the design in BWP, where dropout was discharge from the hospital, failure time was time of infection following surgery, and administrative censoring was fixed at 30 days.

Initial double sampling is a special case of dropout double sampling. Imagine that, as a result of initial double sampling, each subject is randomly assigned a label, either PF or FF, and when dropout occurs, subjects are randomly assigned to follow-up or not based on the PF and FF labels. The only difference from dropout double sampling is the additional random assignment to PF and FF among subjects who did not drop out. This extra information is not used in either the FR or the BWP analysis.
- Published
- 2001
- Full Text
- View/download PDF
24. Estimating plant biomass: A review of techniques
- Author
-
C. J. Wheeler and W. R. Catchpole
- Subjects
Relative cost, Biomass (ecology), Ecology, Double sampling, Homogeneous, Direct sampling, Sampling (statistics), Environmental science, Visual estimation, Vegetation, Ecology, Evolution, Behavior and Systematics - Abstract
Many different techniques have been used to estimate biomass for ecological, agricultural and forestry research. The most suitable technique depends on available budget, accuracy required, structure and composition of the vegetation, and whether species and component biomass are required. A survey of the methods that have been used to estimate biomass is given, and the advantages and disadvantages of direct sampling, calibrated visual estimation and double sampling techniques are discussed. The relative cost and accuracy of each technique are summarized and recommendations are made for the use of the techniques in different vegetation complexes, such as discrete shrubs or trees, patchy vegetation, homogeneous vegetation, and species-rich inhomogeneous heathland.
- Published
- 1992
- Full Text
- View/download PDF
25. ESTIMATING BIOMASS IN A VEGETATION MOSAIC USING DOUBLE SAMPLING WITH REGRESSION
- Author
-
W.R. Catchpole and E.A. Catchpole
- Subjects
Statistics and Probability, Biomass (ecology), Vegetation types, Double sampling, Vegetation type, Sampling (statistics), Soil science, Quadrat, Vegetation (pathology), Regression, Mathematics - Abstract
Summary: In some vegetation types, total fuel loading (phytomass) can be predicted from easily measured variables such as vegetation type and height. A double sampling scheme is proposed in which fuel loading on a particular site is estimated by using quadrat sampling within patches of similar vegetation to develop a general prediction equation, and then line-intercept sampling to estimate the mean of the easily measured variables on the site. The method is applied to estimate the total fine-fuel loading on a heathland site.
- Published
- 1991
- Full Text
- View/download PDF
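The double-sampling regression estimator underlying designs like the one above has the standard form y_reg = ybar2 + b * (xbar1 - xbar2): the cheap variable is measured on a large first-phase sample, the expensive clipping on a second-phase subsample. A minimal sketch on simulated data follows; the fuel-load relation and all sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)

# First phase: cheap covariate (e.g. vegetation height) on many quadrats.
n1 = 400
height = rng.gamma(4.0, 0.25, n1)
xbar1 = height.mean()

# Second phase: a subsample is also clipped and weighed (expensive y).
idx = rng.choice(n1, size=40, replace=False)
x2 = height[idx]
y2 = 1.5 * x2 + rng.normal(0, 0.2, 40)     # hypothetical fuel-load relation

# Double-sampling regression estimator of mean fuel loading.
b = np.cov(x2, y2)[0, 1] / np.var(x2, ddof=1)
y_reg = y2.mean() + b * (xbar1 - x2.mean())
print(f"regression estimate {y_reg:.3f} vs subsample mean {y2.mean():.3f}")
```

The adjustment term b * (xbar1 - xbar2) pulls the small-sample mean toward what the large cheap sample says about the site, which is where the variance reduction comes from.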
26. Optimum Allocation for Soil Contamination Investigations in Hsinchu, Taiwan, by Double Sampling
- Author
-
Ya Chi Chang and Hund-Der Yeh
- Subjects
Double sampling, Sample size determination, Environmental engineering, Optimal allocation, Mineralogy, Soil Science, Environmental science, Heavy metals, Standard methods, Contamination, Soil contamination - Abstract
The double sampling (DS) scheme is a cost-effective sampling method that combines an expensive measurement procedure with an inexpensive but less accurate one. Double sampling works when the true coefficient of determination (ρ²) between the two techniques is known in advance, but that is hardly ever the case. The main objective of this study was to extend the TSP and determine the optimum allocation of the samples under a fixed budget for the case ρ² > 0.9. A soil from Hsinchu, Taiwan, contaminated with the heavy metals Zn, Cu, Pb, Ni, Cr, and Cd was sampled at 0.15, 0.3, 0.45, and 0.6 m depth at 36 sites (144 samples). All samples were analyzed with field-portable x-ray fluorescence, and a subset of 40 samples was selected for analyses of Zn, Cu, and Pb with standard methods in the laboratory. Results from both measurements were linearly correlated, with estimated ρ² values of 0.96, 0.95, and 0.97 for Zn, Cu, and Pb, respectively. Considering ρ² = 0.99, the optimum subsample and sample sizes were 5 out of 167, 4 out of 173, and 5 out of 167 for Zn, Cu, and Pb, respectively. The extended TSP analyses reduced the number of superfluous samples to only two or three, which was fewer than obtained by the TSP (9-16).
- Published
- 2007
- Full Text
- View/download PDF
27. Estimation of infection prevalence and sensitivity in a stratified two-stage sampling design employing highly specific diagnostic tests when there is no gold standard.
- Author
-
Miller E, Huppert A, Novikov I, Warburg A, Hailu A, Abbasi I, and Freedman LS
- Subjects
- Cohort Studies, Diagnostic Tests, Routine, Ethiopia epidemiology, Humans, Leishmaniasis, Visceral blood, Leishmaniasis, Visceral diagnosis, Leishmaniasis, Visceral epidemiology, Models, Statistical, Polymerase Chain Reaction, Prevalence, Probability, Sample Size, Sensitivity and Specificity, Bias, Communicable Diseases diagnosis, Communicable Diseases epidemiology, Epidemiologic Methods
- Abstract
In this work, we describe a two-stage sampling design to estimate the infection prevalence in a population. In the first stage, an imperfect diagnostic test was performed on a random sample of the population. In the second stage, a different imperfect test was performed on a stratified random sample of the first sample. To estimate infection prevalence, we assumed conditional independence between the diagnostic tests and developed method-of-moments estimators based on the expected proportions of people with positive and negative results on both tests, which are functions of the tests' sensitivity, specificity, and the infection prevalence. A closed-form solution of the estimating equations was obtained assuming a specificity of 100% for both tests. We applied our method to estimate the infection prevalence of visceral leishmaniasis according to two quantitative polymerase chain reaction tests performed on blood samples taken from 4756 patients in northern Ethiopia. The sensitivities of the tests were also estimated, as well as the standard errors of all estimates, using a parametric bootstrap. We also examined the impact of departures from our assumptions of 100% specificity and conditional independence on the estimated prevalence., (Copyright © 2015 John Wiley & Sons, Ltd.)
- Published
- 2015
- Full Text
- View/download PDF
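With two perfectly specific, conditionally independent tests, the method-of-moments equations have a simple closed form: with p1 = P(T1+), p2 = P(T2+) and p12 = P(both +), the prevalence is p1*p2/p12 and the sensitivities are p12/p2 and p12/p1. The sketch below applies both tests to the whole simulated sample (the paper's design applies the second test only to a stratified subsample) and uses a simple nonparametric rather than parametric bootstrap.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
pi, s1, s2 = 0.10, 0.8, 0.7          # prevalence and the two sensitivities

infected = rng.random(n) < pi
# Both tests perfectly specific and conditionally independent given infection.
t1 = infected & (rng.random(n) < s1)
t2 = infected & (rng.random(n) < s2)

def mom_estimates(t1, t2):
    p1, p2, p12 = t1.mean(), t2.mean(), (t1 & t2).mean()
    return p1 * p2 / p12, p12 / p2, p12 / p1   # (prevalence, sens1, sens2)

pi_hat, s1_hat, s2_hat = mom_estimates(t1, t2)

# Nonparametric bootstrap for the standard error of the prevalence estimate.
boots = []
for _ in range(500):
    i = rng.integers(0, n, n)
    boots.append(mom_estimates(t1[i], t2[i])[0])
print(f"prevalence {pi_hat:.3f} (SE ~ {np.std(boots):.3f}), "
      f"sensitivities {s1_hat:.2f} / {s2_hat:.2f}")
```

The closed form works because perfect specificity makes every positive a true infection, so the two marginal positive rates and the joint positive rate over-determine the three unknowns exactly.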
28. Choosing profile double-sampling designs for survival estimation with application to President's Emergency Plan for AIDS Relief evaluation.
- Author
-
An MW, Frangakis CE, and Yiannoutsos CT
- Subjects
- Algorithms, Humans, Kenya epidemiology, United States, Acquired Immunodeficiency Syndrome mortality, International Cooperation, Research Design, Sampling Studies, Survival Analysis
- Abstract
Most studies that follow subjects over time are challenged by having some subjects who drop out. Double sampling is a design that selects, and devotes resources to intensively pursue and find, a subset of these dropouts, then uses data obtained from these to adjust naïve estimates, which are potentially biased by the dropout. Existing methods to estimate survival from double sampling assume a random sample. In limited-resource settings, however, generating accurate estimates using a minimum of resources is important. We propose using double-sampling designs that oversample certain profiles of dropouts as more efficient alternatives to random designs. First, we develop a framework to estimate the survival function under these profile double-sampling designs. We then derive the precision of these designs as a function of the rule for selecting different profiles, in order to identify more efficient designs. We illustrate using data from the United States President's Emergency Plan for AIDS Relief-funded HIV care and treatment program in western Kenya. Our results show why and how more efficient designs should oversample patients with shorter dropout times. Further, our work suggests generalizable practice for more efficient double-sampling designs, which can help maximize efficiency in resource-limited settings., (Copyright © 2014 John Wiley & Sons, Ltd.)
- Published
- 2014
- Full Text
- View/download PDF
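The core idea in the abstract above, pursuing dropouts with profile-dependent intensity and re-weighting the ones that are found, can be sketched with a toy inverse-probability-weighted estimator. This is not the authors' estimator: the exponential rates, the time point, the selection rule, and all probabilities below are invented for illustration.

```python
import random

random.seed(1)

n = 50_000
t_star = 2.0            # time at which we estimate survival S(t*)
records = []
for _ in range(n):
    death = random.expovariate(0.5)   # true survival time
    drop = random.expovariate(0.3)    # potential dropout time
    if drop < min(death, t_star):
        records.append(("dropout", drop, death))
    else:
        records.append(("observed", drop, death))

def select_prob(drop_time):
    """Profile rule: oversample early dropouts, since they carry the
    most information about post-dropout survival."""
    return 0.8 if drop_time < 1.0 else 0.3

num = den = 0.0
for status, drop, death in records:
    if status == "observed":          # followed to death or to t_star
        num += death > t_star
        den += 1.0
    else:
        p = select_prob(drop)
        if random.random() < p:       # dropout found by intensive pursuit
            w = 1.0 / p               # Horvitz-Thompson weight
            num += w * (death > t_star)
            den += w

est = num / den
print(round(est, 3))
```

Because each found dropout is weighted by the inverse of its profile's selection probability, the weighted proportion is consistent for the true S(t*) = exp(-0.5 * 2) ≈ 0.368 even though early dropouts are deliberately oversampled.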
29. Prediction of Alfalfa Herbage Mass Using Sward Height, Ground Cover, and Disk Technique
- Author
-
T. C. Griggs and William C. Stringer
- Subjects
Agronomy ,Double sampling ,Calibration (statistics) ,Environmental science ,Regression analysis ,Cover (algebra) ,Agronomy and Crop Science - Published
- 1988
- Full Text
- View/download PDF
30. Total Nitrogen using a Sodium Hydroxide Index and Double Sampling Theory
- Author
-
J. Michael Geist and John W. Hazard
- Subjects
chemistry.chemical_compound ,Index (economics) ,chemistry ,Double sampling ,Sodium hydroxide ,Inorganic chemistry ,Total nitrogen ,Soil Science ,Environmental science - Published
- 1975
- Full Text
- View/download PDF
31. De wiskundige behandeling van het onderscheidingsvermogen van een steekproef schema [The mathematical treatment of the discriminating power of a sampling plan]
- Author
-
G. Quintelier
- Subjects
Statistics and Probability ,Measure (data warehouse) ,symbols.namesake ,Relation (database) ,Double sampling ,Simple (abstract algebra) ,symbols ,Applied mathematics ,Statistics, Probability and Uncertainty ,Poisson distribution ,Algorithm ,Mathematics ,Power (physics) - Abstract
Summary Mathematical treatment of the discriminating power of a sampling plan. In a foregoing paper the advantage of using the discriminating power for finding a sampling plan fitting practical requirements has been explained. In this paper the discriminating power, δ, is mathematically treated. By making use of the relation between the Poisson and the χ² probabilities, and of the Wilson-Hilferty approximation for the χ²-distribution, simple mathematical relations can be established between δ and the fundamental parameters of a sampling plan. By comparing various types of plans corresponding to the same value of δ, a measure of efficiency is obtained. Single sampling plans and the single and double sampling plans of Wharton are intercompared in this manner.
- Published
- 1950
- Full Text
- View/download PDF
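The two facts the abstract above relies on are the exact identity P(Poisson(λ) ≤ c) = P(χ² with 2(c+1) degrees of freedom > 2λ) and the Wilson-Hilferty normal approximation to the χ² distribution. A short sketch, with λ and c chosen arbitrarily, shows how closely the approximation tracks the exact Poisson probability:

```python
import math

def poisson_cdf(c, lam):
    """Exact P(X <= c) for X ~ Poisson(lam)."""
    term, total = math.exp(-lam), math.exp(-lam)
    for k in range(1, c + 1):
        term *= lam / k
        total += term
    return total

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def chi2_cdf_wh(x, k):
    """Wilson-Hilferty approximation: (X/k)^(1/3) is approximately
    normal with mean 1 - 2/(9k) and variance 2/(9k)."""
    mu = 1.0 - 2.0 / (9.0 * k)
    sd = math.sqrt(2.0 / (9.0 * k))
    return phi(((x / k) ** (1.0 / 3.0) - mu) / sd)

# Identity: P(Poisson(lam) <= c) = P(chi2 with 2(c+1) df > 2*lam)
lam, c = 4.0, 6
exact = poisson_cdf(c, lam)
approx = 1.0 - chi2_cdf_wh(2.0 * lam, 2 * (c + 1))
print(round(exact, 4), round(approx, 4))
```

For these values the exact probability is about 0.8893 and the Wilson-Hilferty route reproduces it to three decimal places, which is why the paper can turn sampling-plan calculations into simple closed-form relations.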
32. Estimating Forage Yield by the Double‐Sampling Method 1
- Author
-
H. G. Wilm, David F. Costello, and G. E. Klipple
- Subjects
Efficiency ,Agronomy ,Clipping (photography) ,Double sampling ,Sample (material) ,Yield (finance) ,Sampling (statistics) ,Forage ,Agronomy and Crop Science ,Mathematics - Abstract
Two double-sampling methods, using line-transects and forage weight estimates, were tested to ascertain their relative efficiency in estimating the amount of forage present on experimental areas, as compared to the clipping of vegetation on sample plots. Considering field work alone, double-sampling with the line-transect method provided an increase in information of about 28% as compared with the information which could have been obtained by clipping only, during the same period of time. On the basis of time expended in both field and office, the double-sampling method provided only about 11% more information than could have been obtained by clipping alone. The use of weight estimates in double-sampling provided about 37% more information than could be obtained by straight clipping in an equivalent amount of field time. If field work and office compilation are both considered, the gain in information dropped to about 14%. Under our conditions of intensive sampling both methods provided substantial economies in field time and some saving in total time expended. In other studies, however, these savings would be considerably affected by the relative amount of time consumed in field travel as compared to the time requirements of clipping and double-sampling. In large-scale extensive surveys, the clipping of all plots may prove to be at least as efficient as any short-cut method.
- Published
- 1944
- Full Text
- View/download PDF
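The double-sampling estimator behind studies like the one above takes a cheap auxiliary measurement (a transect or visual weight estimate) on a large sample and the expensive exact measurement (clipped weight) on a small subsample, then combines them through a regression adjustment: y_reg = ybar + b * (xbar_large - xbar). The sketch below uses invented plot data and sample sizes purely to illustrate the mechanics:

```python
import random

random.seed(2)

N_CHEAP, N_EXPENSIVE = 500, 50

# Hypothetical plots: true yield y, plus a cheap estimate x that is
# correlated with y but biased and noisy.
plots = []
for _ in range(N_CHEAP):
    y = random.gauss(100.0, 15.0)
    x = 0.9 * y + random.gauss(0.0, 5.0)
    plots.append((x, y))

xbar_large = sum(x for x, _ in plots) / N_CHEAP

sub = random.sample(plots, N_EXPENSIVE)        # plots we also clip
xbar = sum(x for x, _ in sub) / N_EXPENSIVE
ybar = sum(y for _, y in sub) / N_EXPENSIVE

# Least-squares slope of y on x within the expensive subsample.
sxx = sum((x - xbar) ** 2 for x, _ in sub)
sxy = sum((x - xbar) * (y - ybar) for x, y in sub)
b = sxy / sxx

# Regression (double-sampling) estimator of the mean yield.
y_reg = ybar + b * (xbar_large - xbar)
print(round(y_reg, 2))
```

The regression adjustment lets the 50 clipped plots borrow precision from all 500 cheap measurements, which is exactly the field-time economy the abstract quantifies.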
33. Het nomografisch bepalen van de parameters van een steekproefschema [Nomographic determination of the parameters of a sampling plan]
- Author
-
H. C. Hamaker and G. Quintelier
- Subjects
Statistics and Probability ,symbols.namesake ,Double sampling ,Sample size determination ,Sample (material) ,Statistics ,symbols ,Statistics, Probability and Uncertainty ,Nomogram ,Poisson distribution ,Mathematics ,Acceptable quality limit - Abstract
Summary Nomographic determination of the parameters of a sampling plan. For ordinary single sampling plans, and for single and double sampling plans according to Wharton, we may, under Poisson conditions, determine the sample size, n, and the maximum number of defectives, c, permitted in a sample by means of a very simple nomogram. We have to enter this nomogram by means of the 'discriminating power' of the plan, which is defined as the ratio of the lot tolerance percent defective to the acceptable quality level.
- Published
- 1950
- Full Text
- View/download PDF
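What the nomogram in the abstract above reads off can also be found by direct search: the smallest n, and the acceptance number c, such that the plan's acceptance probability is high at the acceptable quality level (AQL) and low at the lot tolerance percent defective (LTPD). The quality levels and risk values below are invented for the example (discriminating power LTPD/AQL = 5):

```python
import math

def poisson_cdf(c, lam):
    """Exact P(X <= c) for X ~ Poisson(lam)."""
    term, total = math.exp(-lam), math.exp(-lam)
    for k in range(1, c + 1):
        term *= lam / k
        total += term
    return total

def single_plan(aql, ltpd, alpha=0.05, beta=0.10):
    """Smallest-n single sampling plan under Poisson conditions with
    producer's risk <= alpha at aql and consumer's risk <= beta at ltpd."""
    for n in range(1, 100_000):
        for c in range(0, n + 1):
            if poisson_cdf(c, n * aql) >= 1.0 - alpha:
                # c is the smallest acceptance number meeting the
                # producer's risk; larger c only worsens consumer's risk.
                if poisson_cdf(c, n * ltpd) <= beta:
                    return n, c
                break
    return None

n, c = single_plan(aql=0.01, ltpd=0.05)
print(n, c)
```

The search returns a plan of the familiar textbook form (here n = 134, c = 3): accept the lot if at most c defectives turn up in a sample of n.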
34. THE VISUAL ESTIMATION OF PASTURE AVAILABILITY USING STANDARD PASTURE CORES
- Author
-
B. A. Hamilton, K. J. Hutchinson, and R. W. McLean
- Subjects
geography ,geography.geographical_feature_category ,Management, Monitoring, Policy and Law ,Coring ,Pasture ,Regression ,Set (abstract data type) ,Agronomy ,Double sampling ,Statistics ,Visual scoring ,Visual estimation ,Agronomy and Crop Science ,Reference standards ,Mathematics - Abstract
A coring technique (6), which may be used for estimating the mean amount of herbage on closely grazed pastures, has been adapted to include visual scoring. Two methods are described, both of which use sets of pasture cores from the sward as reference standards. The observers score the herbage on view at random sites against the standards. In the first method the scores are converted to herbage yields directly by reference to the yields of the standards. In the second method a double-sampling regression technique is used and the set of standards serves as a visual guide only.
- Published
- 1972
- Full Text
- View/download PDF
35. On the Theory of Classical Regression and Double Sampling Estimation
- Author
-
B. D. Tikkiwal
- Subjects
Statistics and Probability ,Estimation ,education.field_of_study ,010102 general mathematics ,Population ,01 natural sciences ,Regression ,Stratified sampling ,010104 statistics & probability ,Double sampling ,Statistics ,Econometrics ,0101 mathematics ,education ,Mathematics - Abstract
SUMMARY This paper examines the various classical results in the theory of regression and double sampling estimation and extends them to the study of a finite population.
- Published
- 1960
- Full Text
- View/download PDF
36. Een systematische vergelijking van de statistische eigenschappen van hedendaagse steekproef-schema's [A systematic comparison of the statistical properties of present-day sampling schemes]
- Author
-
H. C. Hamaker
- Subjects
Statistics and Probability ,Sampling scheme ,Efficiency ,Double sampling ,Coincident ,Sample size determination ,Statistics, Probability and Uncertainty ,Sequential sampling ,Random walk ,Algorithm ,Mathematics - Abstract
Summary (A systematic comparison of the statistical properties of present-day sampling schemes). A survey of various sampling schemes, carried out with the aid of the 'random walk diagram', leads to the conception that it must be possible to effect the same degree of inspection by the application of different schemes. The degree of inspection of a scheme is contained in its 'operating characteristic', which is specified by two constants, viz. its centre q0, for which P = 1/2, and its slope s at this point, defined by s = -(dP/dq) at q = q0. It is shown that, if the operating characteristics of two different sampling schemes possess the same values of q0 and s, the two characteristics are almost completely coincident. Thus two sampling schemes having the same q0 and s will give identical inspection performances, and are consequently defined as 'equivalent'. And by comparing the average sample sizes of equivalent schemes we may compute their relative efficiency. For each set of values q0 and s it is possible to construct a single sampling scheme possessing these characteristics. Hence to any scheme whatever there will correspond an 'equivalent single sampling scheme', and, by adopting the single sampling scheme as a standard of reference, we may arrive at a general definition of efficiency. By the introduction of q0 and s the formulae relating to the sequential sampling scheme are greatly simplified, and for each set of values q0 and s there is one corresponding sequential scheme. By discussing a particular example of the double sampling scheme it is shown that the methods developed in this paper can successfully be applied to a systematic treatment of the variety of schemes now in existence.
- Published
- 1948
- Full Text
- View/download PDF
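The two constants the abstract above uses to characterize a scheme, the centre q0 where the operating characteristic equals 1/2 and the negated slope s at that point, are easy to extract numerically. The sketch below does so for a single sampling plan under Poisson conditions; the plan (n = 134, c = 3) is an arbitrary example, not one from the paper:

```python
import math

def poisson_cdf(c, lam):
    """Exact P(X <= c) for X ~ Poisson(lam)."""
    term, total = math.exp(-lam), math.exp(-lam)
    for k in range(1, c + 1):
        term *= lam / k
        total += term
    return total

def oc(q, n, c):
    """Operating characteristic: probability of accepting a lot with
    fraction defective q, under Poisson conditions."""
    return poisson_cdf(c, n * q)

def characterize(n, c):
    """Return (q0, s): the centre where the OC curve equals 1/2 and
    the negated slope s = -(dP/dq) there, found numerically."""
    lo, hi = 1e-9, 1.0
    for _ in range(200):                 # bisection: OC is decreasing in q
        mid = 0.5 * (lo + hi)
        if oc(mid, n, c) > 0.5:
            lo = mid
        else:
            hi = mid
    q0 = 0.5 * (lo + hi)
    h = 1e-6                             # central-difference slope
    s = -(oc(q0 + h, n, c) - oc(q0 - h, n, c)) / (2 * h)
    return q0, s

q0, s = characterize(n=134, c=3)
print(round(q0, 4), round(s, 1))
```

Two schemes whose OC curves share these (q0, s) values are, in the paper's sense, equivalent, so comparing their average sample sizes measures relative efficiency.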
37. AN EVALUATION OF AN ELECTRONIC INSTRUMENT FOR PASTURE YIELD ESTIMATION. 2. Use with double sampling for regression estimation
- Author
-
H. L. Back, F. E. Alder, and B. G. Gibbs
- Subjects
Estimation ,geography ,geography.geographical_feature_category ,Double sampling ,Electronic instrument ,Yield (finance) ,Statistics ,Management, Monitoring, Policy and Law ,Agronomy and Crop Science ,Pasture ,Regression ,Mathematics ,Stratified sampling - Published
- 1969
- Full Text
- View/download PDF
38. On the Admissibility of the Regression Estimator
- Author
-
David R. Bellhouse and V. M. Joshi
- Subjects
Statistics and Probability ,Double sampling ,Statistics ,Regression estimator ,Mathematics - Published
- 1984
- Full Text
- View/download PDF