53,285 results for "Sample size determination"
Search Results
52. Homogeneity testing for binomial proportions under stratified double-sampling scheme with two fallible classifiers.
- Author
-
Qiu, Shi-Fang and Fu, Qi-Xiang
- Subjects
- LIKELIHOOD ratio tests, BINOMIAL distribution, HOMOGENEITY, ERROR rates, EXPERIMENTAL design, COMPUTER simulation, RESEARCH, SAMPLE size (Statistics), RESEARCH methodology, REGRESSION analysis, MEDICAL cooperation, EVALUATION research, COMPARATIVE studies, STATISTICAL models, PROBABILITY theory - Abstract
This article investigates the homogeneity testing problem of binomial proportions for stratified partially validated data obtained by a double-sampling method with two fallible classifiers. Several test procedures, including the weighted-least-squares test with/without log-transformation, logit-transformation and double log-transformation, the likelihood ratio test, and the score test, are developed to test homogeneity under two models distinguished by the conditional independence assumption of the two classifiers. Simulation results show that the score test performs better than the other tests, in the sense that its empirical size is generally controlled around the nominal level, and it is hence recommended for practical applications. The other tests also perform well when both the binomial proportions and the sample sizes are not small. Approximate sample sizes based on the score test, the likelihood ratio test, and the weighted-least-squares test with double log-transformation are generally accurate in terms of the empirical power and type I error rate at the estimated sample sizes, and are hence recommended. An example from a malaria study illustrates the proposed methodologies. [ABSTRACT FROM AUTHOR]
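For context, a minimal sketch of the score test in its simplest, fully validated two-sample form (the paper's procedures additionally handle stratification and misclassification by fallible classifiers; the counts below are hypothetical):

```python
import numpy as np
from scipy import stats

def score_test_two_proportions(x1, n1, x2, n2):
    """Score (Pearson chi-square) test of H0: p1 == p2 for two binomial
    samples; the statistic uses the pooled proportion, the MLE under H0."""
    p_pool = (x1 + x2) / (n1 + n2)
    var = p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)
    z = (x1 / n1 - x2 / n2) / np.sqrt(var)
    return z, 2 * stats.norm.sf(abs(z))      # two-sided p-value

z, p = score_test_two_proportions(x1=45, n1=120, x2=30, n2=110)
print(f"z = {z:.3f}, p = {p:.4f}")
```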
- Published
- 2020
- Full Text
- View/download PDF
53. Sample size calculation and optimal design for regression-based norming of tests and questionnaires
- Author
-
Math J. J. M. Candel, Francesco Innocenti, Gerard J. P. van Breukelen, and Frans E. S. Tan
- Subjects
Optimal design, percentile rank score, Z-score, Regression analysis, ESTABLISHING NORMATIVE DATA, Standard score, normative data, Regression, sample size calculation, Percentile rank, Sample size determination, Statistics, Psychological testing, Psychology (miscellaneous), optimal design, Categorical variable, Mathematics - Abstract
To prevent mistakes in psychological assessment, the precision of test norms is important. This can be achieved by drawing a large normative sample and using regression-based norming. Based on that norming method, a procedure for sample size planning to make inference on Z-scores and percentile rank scores is proposed. Sampling variance formulas for these norm statistics are derived and used to obtain the optimal design, that is, the optimal predictor distribution, for the normative sample, thereby maximizing precision of estimation. This is done under five regression models with a quantitative and a categorical predictor, differing in whether they allow for interaction and nonlinearity. Efficient robust designs are given in case of uncertainty about the regression model. Furthermore, formulas are provided to compute the normative sample size such that individuals' positions relative to the derived norms can be assessed with prespecified power and precision. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
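A minimal sketch of regression-based norming with a single linear predictor, assuming normally distributed residuals (the simulated normative sample and the new test-taker's values are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical normative sample: test score declines linearly with age.
age = rng.uniform(20, 80, size=500)
score = 50 - 0.2 * age + rng.normal(0, 5, size=500)

# Fit the norming regression and estimate the residual SD.
X = np.column_stack([np.ones_like(age), age])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
resid = score - X @ beta
sigma = resid.std(ddof=X.shape[1])

# Norm statistics for a new test-taker (hypothetical values).
new_age, new_score = 65.0, 40.0
z = (new_score - (beta[0] + beta[1] * new_age)) / sigma   # Z-score
print(f"Z = {z:.2f}, percentile rank = {100 * stats.norm.cdf(z):.1f}")
```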
- Published
- 2023
54. Correcting for the multiplicative and additive effects of measurement unreliability in meta-analysis of correlations
- Author
-
Zijun Ke and Xin Tong
- Subjects
Sample size determination, Computer science, Meta-analysis, Multiplicative function, Econometrics, Psychology (miscellaneous), PsycINFO, Publication bias, Sensitivity (control systems), Popularity, Reliability (statistics) - Abstract
As a powerful tool for synthesizing information from multiple studies, meta-analysis has gained high popularity in many disciplines. Conclusions stemming from meta-analyses are often used to direct theory development, calibrate sample size planning, and guide critical decision-making and policymaking. However, meta-analyses can be conflicting, misleading, and irreproducible. One of the reasons meta-analyses can be misleading is the improper handling of measurement unreliability. We show that even when there is no publication bias, current meta-analysis procedures frequently detect nonexistent effects and provide severely biased estimates and intervals with coverage rates far below the intended level. In this study, an effective approach to correcting for unreliability is proposed and evaluated via simulation studies. Its sensitivity to violations of the homogeneous-reliability and residual-correlation assumptions is also tested. The proposed method is illustrated using a real meta-analysis on the relationship between extroversion and subjective well-being. Substantial differences in meta-analytic results are observed between the proposed method and existing methods. Further, although not specifically designed for aggregating effect sizes across various measures, the proposed method can be used for this purpose. The study ends with discussions of the limitations and guidelines for implementing the proposed approach. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
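For background, the classical Spearman correction for attenuation, which addresses the multiplicative effect of unreliability (the paper's proposal goes further, also handling additive effects and meta-analytic aggregation; the values below are illustrative):

```python
import numpy as np

def disattenuate(r_xy, rel_x, rel_y):
    """Spearman's correction: the observed correlation is attenuated by
    the square root of the product of the two reliabilities."""
    return r_xy / np.sqrt(rel_x * rel_y)

# Observed r = .30 with reliabilities .80 and .70 (illustrative values)
print(round(disattenuate(0.30, 0.80, 0.70), 3))  # -> 0.401
```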
- Published
- 2023
55. Second-order refinements for t-ratios with many instruments
- Author
-
Yukitoshi Matsushita and Taisuke Otsu
- Subjects
Economics and Econometrics, Heteroscedasticity, Sample size determination, Simultaneous equations, Applied Mathematics, Homoscedasticity, Instrumental variable, Null (mathematics), Applied mathematics, Contrast (statistics), Estimator, Mathematics - Abstract
This paper studies second-order properties of the many-instruments-robust t-ratios based on the limited information maximum likelihood and Fuller estimators for instrumental variable regression models with homoskedastic errors under the many instruments asymptotics, where the number of instruments may increase proportionally with the sample size n, and proposes second-order refinements to the t-ratios to improve their size and power properties. Based on asymptotic expansions of the null and non-null distributions of the t-ratios derived under the many instruments asymptotics, we show that the second-order terms of those expansions may have non-trivial impacts on the size as well as the power properties. Furthermore, we propose adjusted t-ratios whose approximation errors for the null rejection probabilities are of order O(n^{-1}), in contrast to order O(n^{-1/2}) for the unadjusted t-ratios, and show that these adjustments induce some desirable power properties in terms of local maximinity. Although these results are derived under homoskedastic errors, we also establish a stochastic expansion for a heteroskedasticity-robust t-ratio, and propose an analogous adjustment under slight deviations from homoskedasticity.
- Published
- 2023
56. A tutorial on assessing statistical power and determining sample size for structural equation models
- Author
-
Lisa J. Jobst, Morten Moshagen, and Martina Bader
- Subjects
Goodness of fit, Sample size determination, Econometrics, Range (statistics), A priori and a posteriori, Measurement invariance, Psychology (miscellaneous), Statistical power, Structural equation modeling, Statistical hypothesis testing - Abstract
Structural equation modeling (SEM) is a widespread approach to test substantive hypotheses in psychology and other social sciences. However, most studies involving structural equation models neither report statistical power analysis as a criterion for sample size planning nor evaluate the achieved power of the performed tests. In this tutorial, we provide a step-by-step illustration of how a priori, post hoc, and compromise power analyses can be conducted for a range of different SEM applications. Using illustrative examples and the R package semPower, we demonstrate power analyses for hypotheses regarding overall model fit, global model comparisons, particular individual model parameters, and differences in multigroup contexts (such as in tests of measurement invariance). We encourage researchers to yield reliable, and thus more replicable, results based on thoughtful sample size planning, especially if small or medium-sized effects are expected. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
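The tutorial itself uses the R package semPower; as a language-neutral sketch of the same idea, the following computes power for the RMSEA-based chi-square test of model fit via the noncentral chi-square approach of MacCallum, Browne, and Sugawara (1996), with illustrative inputs:

```python
from scipy.stats import chi2, ncx2

def sem_power_rmsea(n, df, rmsea0=0.0, rmsea1=0.08, alpha=0.05):
    """Power of the chi-square test of SEM model fit, based on
    RMSEA-defined null and alternative (MacCallum et al., 1996)."""
    ncp0 = (n - 1) * df * rmsea0**2            # noncentrality under H0
    ncp1 = (n - 1) * df * rmsea1**2            # noncentrality under H1
    crit = ncx2.ppf(1 - alpha, df, ncp0) if ncp0 > 0 else chi2.ppf(1 - alpha, df)
    return ncx2.sf(crit, df, ncp1)             # P(reject | H1)

print(f"power = {sem_power_rmsea(n=200, df=40):.3f}")
```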
- Published
- 2023
57. Vocal Tract Discomfort Symptoms in Elementary and High School Teachers
- Author
-
Ali Kakavandi, Ronald C. Scherer, Mohsen Vahedi, and Mahdi Tahamtan
- Subjects
medicine.medical_specialty, business.industry, Large effect size, education, Mean age, Audiology, LPN and LVN, Mean frequency, Likert scale, Speech and Hearing, School teachers, medicine.anatomical_structure, Otorhinolaryngology, Sample size determination, Throat, medicine, business, Vocal tract - Abstract
Summary Objectives The vocal tract discomfort scale is a self-rating seven-point Likert scale that quantifies the frequency and severity of eight qualitative descriptors, including burning, tight, dry, aching, tickling, sore, irritable, and lump in the throat, and ranges from 0 (never/none) to 6 (always/extreme; Mathieson et al. 2009). The objectives of the current study were to compare vocal tract discomfort scale results between elementary school teachers and high school teachers, and between male and female teachers, using the Persian vocal tract discomfort scale. Teachers of different age ranges and levels of experience were also compared regarding vocal tract discomfort symptoms. Methods The researchers chose 20 elementary and high schools by simple random sampling in Khorramabad, Iran. The survey was given to available teachers of the selected schools. After applying the inclusion criteria, meeting the required sample size, and excluding incorrectly completed questionnaires, 120 teachers were selected, 30 for each subgroup. Subjects consisted of 60 elementary school teachers (30 females and 30 males) with a mean age of 40.92 years (standard deviation = 6.07) and 60 high school teachers (30 females and 30 males) with a mean age of 40.67 years (standard deviation = 6.00). SPSS 25 was used to analyze the data. Results Results indicated that the frequency and severity of vocal tract discomfort in elementary school teachers were significantly higher than in high school teachers, with a medium to large effect size. Although the frequency and severity of the symptoms were higher in female compared with male teachers, these differences were not statistically significant. Younger teachers had lower frequency and severity ratings of vocal tract discomfort symptoms than older teachers. Teaching experience was not an important factor in predicting vocal tract discomfort symptoms in teachers. Conclusions The results of this study suggest that there is higher frequency, greater severity, and a higher percentage of vocal tract discomfort symptoms in elementary compared with high school teachers. In addition, although the mean frequency and severity of vocal tract discomfort symptoms were not significantly different between females and males, females reported higher percentages of the symptoms. Because each of the eight vocal tract symptoms was experienced at the time of testing by between 42% (tightness) and 68% (dryness) of the participants, an educational program regarding vocal tract discomfort may be helpful for this profession.
- Published
- 2023
58. Evaluation of sample pooling for the detection of SARS-CoV-2 in a resource-limited setting, Dominican Republic
- Author
-
Robert Paulino Ramirez, Monica Tejeda Ramírez, Jhasmel Cabrera, Elisa Contreras, Camila Del Rosario, and Alejandro Vallejo Degaudenzi
- Subjects
Microbiology (medical), Coronavirus disease 2019 (COVID-19), Computer science, SARS-CoV-2, Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), Pooling, COVID-19, Workload, Sample (statistics), General Medicine, RdRp/E genes, Viral infection, Article, Sample size determination, Statistics, Limited resources, pool testing, Real-time PCR - Abstract
COVID-19 is a worldwide public health threat. Diagnosis by RT-PCR has been employed as the standard method to confirm viral infection. Sample pooling can optimize resources by reducing the workload and reagent shortages, and can be useful in laboratories and countries with limited resources. This study aims to evaluate SARS-CoV-2 detection by pooled sample testing in comparison with individual sample testing. We created 210 pools out of 245 samples, varying from 4 to 10 samples per pool, each containing a positive sample, and conducted detection of the SARS-CoV-2-specific RdRp/E target sites. Pooling of three samples for SARS-CoV-2 detection might be an efficient strategy that preserves RT-PCR sensitivity. Considering the positivity rate in the Dominican Republic, and that larger sample pools have higher probabilities of yielding false negative results, the optimal pool size for this strategy is three samples.
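A back-of-the-envelope sketch of why small pools win at moderate prevalence, using the classical Dorfman two-stage pooling model (the 15% positivity rate is assumed for illustration and is not taken from the paper):

```python
import numpy as np

def expected_tests_per_sample(p, k):
    """Dorfman two-stage pooling: one test per pool of size k, plus k
    individual retests whenever the pool is positive, which happens
    with probability 1 - (1 - p)**k."""
    return 1 / k + 1 - (1 - p) ** k

p = 0.15   # assumed positivity rate, for illustration only
for k in range(2, 11):
    print(f"pool size {k:2d}: {expected_tests_per_sample(p, k):.3f} tests/sample")
```

At this prevalence the expected cost per sample is minimized near k = 3, consistent with the study's recommendation of three-sample pools.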
- Published
- 2023
59. Personality traits associated with Alcohol Dependence Syndrome and its relapse
- Author
-
VS Chauhan, Sunaina Sood, Prateek Yadav, and Rajeev Saini
- Subjects
0301 basic medicine, Alcohol Use Disorders Identification Test, business.industry, media_common.quotation_subject, 030106 microbiology, Conscientiousness, General Medicine, Alcohol use disorder, medicine.disease, Neuroticism, 03 medical and health sciences, 0302 clinical medicine, Sample size determination, Openness to experience, Medicine, Personality, 030212 general & internal medicine, Big Five personality traits, business, Clinical psychology, media_common - Abstract
Background The course of Alcohol Dependence Syndrome (ADS) is marked by multiple relapses, and personality factors are implicated as one of the influences on the course of this disorder. Given the scarcity of available Indian data, this study aimed to identify the personality traits more commonly associated with ADS patients and the specific traits associated with relapses of ADS. Method With a sample of 100 consecutive cases and 100 controls, socio-demographic data were collected. The Alcohol Use Disorder Identification Test (AUDIT) and the Severity of Alcohol Use Disorder Test (SAD-Q) were administered to each patient. Personality dimensions were assessed with the NEO Five-Factor Inventory (NEO-FFI; Costa and McCrae) for both groups, and the groups were compared for differences on each of its subscales. Results The NEO scores showed statistically significant differences, with cases scoring higher on Neuroticism and controls scoring higher on Openness and Conscientiousness. Neuroticism was linked to higher AUDIT and SADQ scores and was also associated with relapses. Other traits also showed statistically significant associations, which are discussed. Conclusion As new factors are explored for effective management, routine personality profiling is easily accomplished and can give valuable insight into a focused and tailored management plan.
- Published
- 2023
60. Pre- and Postoperative High-Speed Videolaryngoscopy Findings in Adductor Spasmodic Dysphonia Following Transoral CO2 LASER-Guided Thyroarytenoid Myoneurectomy
- Author
-
Dushyanth Ganesuni, Shraddha Jayant Saindani, Asheesh Dora Ghanpur, Sachin Gandhi, and Subash Bhatta
- Subjects
Cord, CO2 laser, business.industry, Small sample, Retrospective cohort study, LPN and LVN, Adductor spasmodic dysphonia, 030507 speech-language pathology & audiology, 03 medical and health sciences, Speech and Hearing, 0302 clinical medicine, Otorhinolaryngology, Paired samples, Sample size determination, Anesthesia, Medicine, 030223 otorhinolaryngology, 0305 other medical science, Prospective cohort study, business - Abstract
Summary Introduction Vocal cord vibration after transoral CO2 LASER-guided thyroarytenoid (TA) myoneurectomy in adductor spasmodic dysphonia (AdSD) patients remains unclear. The precise vibratory patterns in AdSD patients are difficult to evaluate with routine videolaryngostroboscopy; high-speed videolaryngoscopy (HSV) is an ideal choice for evaluating such patients. This study compared preoperative and 6-month postoperative vocal fold vibratory onset delay (VFVOD) and closed phase of the glottal cycle (CPGC) in AdSD patients following transoral CO2 LASER-guided TA myoneurectomy using HSV. Materials and methods Retrospective study, conducted from January 2016 to January 2019, of AdSD patients who underwent transoral CO2 LASER-guided TA myoneurectomy. Patient data were acquired from the hospital database to evaluate VFVOD and CPGC from HSV recordings. VFVOD was calculated as the sum of prephonatory delay (PPD) and steady-state delay (SSD); the PPD and SSD were evaluated and compared separately for each patient. MedCalc Version 19.2.6 was used for data analysis, and a paired-samples t test was performed to assess the significance of differences between means. A P value less than 0.05 was considered significant. Results A total of nine patients were included in the study, three females and six males, with an average age of 45.5 ± 6.9 years. The means of postoperative PPD (166.8 ± 22.1), SSD (76.5 ± 8.6), and CPGC (62.6 ± 4.8%) were significantly lower than the means of preoperative PPD (222.6 ± 22.1), SSD (97.7 ± 9.5), and CPGC (71.6 ± 5%), with P values of 0.0007, 0.0001, and 0.0001, respectively. Conclusions There was a significant decrease in VFVOD and CPGC after transoral CO2 LASER-guided TA myoneurectomy in AdSD patients at 6 months of follow-up. This study also establishes the utility of HSV for measuring vocal cord vibration in patients with AdSD. The primary limitations of the study were its small sample size and retrospective nature; future prospective studies with larger sample sizes can further substantiate these findings.
- Published
- 2023
61. Efficient closed-form estimation of large spatial autoregressions
- Author
-
Abhimanyu Gupta
- Subjects
FOS: Computer and information sciences, 91B72, 62P20, Economics and Econometrics, Applied Mathematics, Econometrics (econ.EM), Function (mathematics), Parameter space, Newton's method in optimization, Least squares, Methodology (stat.ME), FOS: Economics and business, Compact space, Autoregressive model, Sample size determination, Applied mathematics, Statistics - Methodology, Economics - Econometrics, Mathematics, Central limit theorem - Abstract
Newton-step approximations to pseudo maximum likelihood estimates of spatial autoregressive models with a large number of parameters are examined, in the sense that the parameter space grows slowly as a function of sample size. These have the same asymptotic efficiency properties as maximum likelihood under Gaussianity but are of closed form. Hence they are computationally simple and free from compactness assumptions, thereby avoiding two notorious pitfalls of implicitly defined estimates of large spatial autoregressions. For an initial least squares estimate, the Newton step can also lead to weaker regularity conditions for a central limit theorem than those extant in the literature. A simulation study demonstrates excellent finite sample gains from Newton iterations, especially in large multiparameter models for which grid search is costly. A small empirical illustration shows improvements in estimation precision with real data.
- Published
- 2023
62. Predictability of the Physical Shipping Market by Freight Derivatives
- Author
-
Emrah Gulay, Korkut Bekiroglu, and Okan Duru
- Subjects
Mathematical optimization, Econometric model, Goodness of fit, Computer science, Sample size determination, Search algorithm, Strategy and Management, Hyperparameter optimization, Contrast (statistics), Sample (statistics), Electrical and Electronic Engineering, Predictability - Abstract
This article investigates the predictability of dry bulk shipments' physical shipping costs while testing the predictive significance of derivative products. Accordingly, a comprehensive grid search procedure is needed to simulate combinations of model structures subject to a cross-validation process. In this regard, the intelligent model search engine (IMSE) is implemented as the search algorithm. The IMSE algorithm is selected due to its broad model space, which naturally encompasses most of the traditional econometric model structures, and its advanced features, such as sample size optimization (which also detects structural breaks) and lag structure optimization (which also reflects seasonality), among others. In contrast to previous studies in shipping market research, IMSE relaxes statistical significance criteria to target out-of-sample predictive accuracy (instead of goodness of fit). Sparsification is retained by utilizing a dropout procedure in line with the cross-validation process. The empirical results indicate that predictive accuracy can be achieved or improved with a shorter sample period: more than half of the entire dataset was not utilized in the final selections for most simulations. One-month and two-month maturity derivative contracts are tested for predictive features, but none of them is selected in the sparse model selections.
- Published
- 2023
63. Formal Analysis and Estimation of Chance in Datasets Based on Their Properties
- Author
-
Petr Knoth, Abdel Aziz Taha, Mihai Lupu, Luca Papariello, and Bampoulidis Alexandros
- Subjects
Estimation, Generalization, Process (engineering), Computer science, business.industry, Small number, Estimator, 02 engineering and technology, Machine learning, computer.software_genre, Class (biology), Computer Science Applications, Computational Theory and Mathematics, Sample size determination, 020204 information systems, 0202 electrical engineering, electronic engineering, information engineering, Artificial intelligence, business, computer, Predictive modelling, Information Systems - Abstract
Machine learning research, particularly in genomics, is often based on wide-shaped datasets, i.e., datasets having a large number of features but a small number of samples. Such configurations raise the possibility of chance influence (an increase in measured accuracy due to chance correlations) on the learning process and the evaluation results. Prior research underlined the problem of generalization of models obtained from such data. In this paper, we investigate the influence of chance on prediction and show its significant effects on wide-shaped datasets. First, we empirically demonstrate how significant the influence of chance in such datasets is by showing that prediction models trained on thousands of randomly generated datasets can achieve high accuracy. This is the case even when using cross-validation. We then provide a formal analysis of chance influence and design formal chance influence estimators based on the dataset parameters, namely the sample size, the number of features, the number of classes, and the class distribution. Finally, we provide an in-depth discussion of the formal analysis, including applications of the findings and recommendations on mitigating chance influence.
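A small sketch reproducing the flavor of this demonstration with scikit-learn: on a wide, purely random dataset, performing feature selection before cross-validation yields inflated accuracy, while nesting the selection inside the folds does not (this illustrates the phenomenon, not the authors' formal estimators):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, p = 40, 5000                       # wide-shaped: few samples, many features
X = rng.normal(size=(n, p))           # pure noise features
y = rng.integers(0, 2, size=n)        # random labels; true accuracy is 0.5

# Leaky protocol: select features on the full data, then cross-validate.
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(), X_sel, y, cv=5).mean()

# Honest protocol: feature selection happens inside each training fold.
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression())
honest = cross_val_score(pipe, X, y, cv=5).mean()

print(f"leaky CV accuracy ~ {leaky:.2f}, honest CV accuracy ~ {honest:.2f}")
```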
- Published
- 2022
64. Optimality of Matched-Pair Designs in Randomized Controlled Trials
- Author
-
Yuehao Bai
- Subjects
FOS: Computer and information sciences, History, Economics and Econometrics, Polymers and Plastics, Average treatment effect, Econometrics (econ.EM), Mathematics - Statistics Theory, Statistics Theory (math.ST), Industrial and Manufacturing Engineering, law.invention, Methodology (stat.ME), FOS: Economics and business, Randomized controlled trial, law, Statistics, FOS: Mathematics, Fraction (mathematics), Business and International Management, Statistics - Methodology, Economics - Econometrics, Mathematics, Estimator, Nominal level, Standard error, Sample size determination, Null hypothesis - Abstract
This paper studies the optimality of matched-pair designs in randomized controlled trials (RCTs). Matched-pair designs are examples of stratified randomization, in which the researcher partitions a set of units into strata based on their observed covariates and assigns a fraction of units in each stratum to treatment. A matched-pair design is such a procedure with two units per stratum. Despite the prevalence of stratified randomization in RCTs, implementations differ vastly. We provide an econometric framework in which, among all stratified randomization procedures, the optimal one in terms of the mean-squared error of the difference-in-means estimator is a matched-pair design that orders units according to a scalar function of their covariates and matches adjacent units. Our framework captures a leading motivation for stratifying in the sense that it shows that the proposed matched-pair design additionally minimizes the magnitude of the ex-post bias, i.e., the bias of the estimator conditional on realized treatment status. We then consider empirical counterparts to the optimal stratification using data from pilot experiments and provide two different procedures depending on whether the sample size of the pilot is large or small. For each procedure, we develop methods for testing the null hypothesis that the average treatment effect equals a prespecified value. Each test we provide is asymptotically exact in the sense that the limiting rejection probability under the null equals the nominal level. We run an experiment on Amazon Mechanical Turk using one of the proposed procedures, replicating one of the treatment arms in DellaVigna and Pope (2018), and find that the standard error decreases by 29%, so that only half of the sample size is required to attain the same standard error.
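A minimal sketch of the matched-pair construction described here: order units by a scalar function of their covariates, match adjacent units, randomize within pairs, and use the difference-in-means estimator (the index function and outcome model are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                    # an even number of units
index = rng.normal(size=n)                 # scalar function of covariates

# Order units by the index and match adjacent units into pairs.
order = np.argsort(index)
pairs = order.reshape(-1, 2)

# Within each pair, flip a coin to assign one unit to treatment.
treated = np.zeros(n, dtype=bool)
coin = rng.integers(0, 2, size=len(pairs))
treated[pairs[np.arange(len(pairs)), coin]] = True

# Difference-in-means estimator of the average treatment effect.
tau = 1.0                                  # hypothetical true effect
y = tau * treated + index + rng.normal(size=n)
print(f"estimated ATE = {y[treated].mean() - y[~treated].mean():.3f}")
```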
- Published
- 2022
65. Sample size determination for mediation analysis of longitudinal data
- Author
-
Haitao Pan, Suyu Liu, Danmin Miao, and Ying Yuan
- Subjects
Sample size determination, Mediation analysis, Longitudinal study, Medicine (General), R5-920 - Abstract
Abstract Background Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. Methods To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations, and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method, and the bootstrap method. Results Among the three methods of testing mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by their relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation); a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design, and sample size tables for the scenarios most commonly encountered in practice are published for convenient use. Conclusions The extensive simulation study showed that the distribution of the product method and the bootstrapping method perform better than Sobel's method, but the product method is recommended for use in practice because of its lower computational load compared to the bootstrapping method. An R package has been developed for sample size determination by the product method in longitudinal mediation study designs.
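A minimal sketch of Sobel's test for a single-level mediated effect, the simplest of the three methods compared (the path estimates and standard errors below are hypothetical; the paper's setting is multilevel and longitudinal):

```python
import numpy as np
from scipy import stats

def sobel_test(a, se_a, b, se_b):
    """Sobel's normal-theory test of the mediated effect a*b, with the
    first-order delta-method standard error.

    a, se_a: path X -> M and its standard error
    b, se_b: path M -> Y (adjusting for X) and its standard error
    """
    se_ab = np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    z = a * b / se_ab
    return z, 2 * stats.norm.sf(abs(z))

z, p = sobel_test(a=0.40, se_a=0.10, b=0.35, se_b=0.12)
print(f"z = {z:.2f}, p = {p:.4f}")
```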
- Published
- 2018
- Full Text
- View/download PDF
66. Determining Plasma Protein Variation Parameters as a Prerequisite for Biomarker Studies—A TMT-Based LC-MSMS Proteome Investigation
- Author
-
Lou-Ann C. Andersen, Nicolai Bjødstrup Palstrøm, Axel Diederichsen, Jes Sanddal Lindholt, Lars Melholt Rasmussen, and Hans Christian Beck
- Subjects
inter-individual biological variation, plasma proteins, plasma proteomics, power analysis, sample size determination, Microbiology, QR1-502 - Abstract
Specific plasma proteins serve as valuable markers for various diseases and are in many cases routinely measured in clinical laboratories by fully automated systems. For safe diagnostics and monitoring using these markers, it is important to ensure an analytical quality in line with clinical needs. For this purpose, information on the analytical and biological variation of the measured plasma protein is important, also in the context of the discovery and validation of novel disease protein biomarkers, particularly in relation to sample size calculations in clinical studies. Nevertheless, information on the biological variation of the majority of medium-to-high abundance plasma proteins is largely absent. In this study, we hypothesized that it is possible to generate data on inter-individual biological variation in combination with analytical variation for several hundred abundant plasma proteins by applying LC-MS/MS in combination with relative quantification using isobaric tagging (10-plex TMT labeling) to plasma samples. Using this proteomic approach, we analyzed 42 plasma samples prepared in duplicate and estimated the technical, inter-individual biological, and total variation of 265 of the most abundant proteins present in human plasma, thereby creating the prerequisites for power analysis and sample size determination in future clinical proteomics studies. Our results demonstrated that only five samples per group may provide sufficient statistical power for most of the analyzed proteins if relative changes in abundance >1.5-fold are expected. Seventeen of the measured proteins are present in the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Biological Variation Database and showed biological CVs remarkably similar to the corresponding CVs listed in the EFLM database, suggesting that the generated variation data are useful for large-scale determination of plasma protein variation.
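A rough sketch of the kind of power reasoning involved: a standard two-sample approximation on the log scale, using the total CV as a stand-in for the log-scale SD (a textbook approximation, not the authors' exact procedure; the 30% CV is illustrative):

```python
import numpy as np
from scipy import stats

def n_per_group(total_cv, fold_change, alpha=0.05, power=0.80):
    """Approximate two-sample size per group for detecting a given fold
    change, treating the total CV as the SD on the natural-log scale
    (a reasonable approximation for modest CVs)."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return int(np.ceil(2 * (z * total_cv / np.log(fold_change)) ** 2))

# e.g. a protein with 30% total CV and an expected 1.5-fold change
print(n_per_group(0.30, 1.5))  # -> 9 samples per group
```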
- Published
- 2021
- Full Text
- View/download PDF
67. Approximations of the power functions for Wald, likelihood ratio, and score tests and their applications to linear and logistic regressions.
- Author
-
Demidenko, Eugene and Penikas, Henry
- Subjects
LOGISTIC regression analysis, TEST scoring, LIKELIHOOD ratio tests, NULL hypothesis, REGRESSION analysis, STATISTICAL power analysis - Abstract
Traditionally, asymptotic tests are studied and applied under local alternatives. There exists a widespread opinion that the Wald, likelihood ratio, and score tests are asymptotically equivalent. We dispel this myth by showing that these tests have different statistical power in the presence of nuisance parameters. The local properties of the tests are described in terms of the first and second derivatives evaluated at the null hypothesis. The comparison of the tests is illustrated with two popular regression models: linear regression with a random predictor and logistic regression with a binary covariate. We study the aberrant behavior of the tests when the distance between the null and the alternative does not vanish with the sample size, and demonstrate that these tests have different asymptotic power. In particular, the score test is generally asymptotically biased but slightly superior for linear regression in a close neighborhood of the null. The power approximations are confirmed through simulations. [ABSTRACT FROM AUTHOR]
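A small Monte Carlo sketch in the spirit of the paper's comparison, contrasting Wald and likelihood ratio rejection rates in logistic regression with a binary covariate (the score test is omitted for brevity; parameter values are hypothetical):

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n, beta0, beta1 = 100, -0.5, 0.8     # hypothetical alternative, beta1 != 0
n_sim, alpha = 1000, 0.05
rej_wald = rej_lr = 0

for _ in range(n_sim):
    x = rng.integers(0, 2, size=n)                  # binary covariate
    p = 1 / (1 + np.exp(-(beta0 + beta1 * x)))
    y = (rng.random(n) < p).astype(float)
    fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    wald = fit.params[1] / fit.bse[1]               # Wald statistic for beta1
    lr = 2 * (fit.llf - fit.llnull)                 # likelihood ratio statistic
    rej_wald += abs(wald) > stats.norm.ppf(1 - alpha / 2)
    rej_lr += lr > stats.chi2.ppf(1 - alpha, df=1)

print(f"Wald power ~ {rej_wald / n_sim:.3f}, LR power ~ {rej_lr / n_sim:.3f}")
```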
- Published
- 2020
- Full Text
- View/download PDF
68. Incorporating historical two‐arm data in clinical trials with binary outcome: A practical approach.
- Author
-
Feißt, Manuel, Krisam, Johannes, and Kieser, Meinhard
- Subjects
- FALSE positive error, CLINICAL trials, ERROR rates - Abstract
SUMMARY: The feasibility of a new clinical trial may be increased by incorporating historical data from previous trials. In the particular case where only data from a single historical trial are available, there exists no clear recommendation in the literature regarding the most favorable approach. A main problem of incorporating historical data is the possible inflation of the type I error rate. A way to control this type of error is the so-called power prior approach. This Bayesian method does not "borrow" the full historical information but uses a parameter 0 ≤ δ ≤ 1 to determine the amount of borrowed data. Based on the methodology of the power prior, we propose a frequentist framework that allows incorporation of historical data from both arms of two-armed trials with binary outcome, while simultaneously controlling the type I error rate. It is shown that for any specific trial scenario a value δ > 0 can be determined such that the type I error rate falls below the prespecified significance level. The magnitude of this value of δ depends on the characteristics of the data observed in the historical trial. Conditionally on these characteristics, an increase in power as compared to a trial without borrowing may result. Similarly, we propose methods for reducing the required sample size. The results are discussed and compared to those obtained in a Bayesian framework. Application is illustrated by a clinical trial example. [ABSTRACT FROM AUTHOR]
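A minimal Bayesian sketch of the power prior idea for a single binomial arm, using a conjugate beta prior (the paper develops a frequentist two-arm framework with explicit type I error control; the counts and δ below are hypothetical):

```python
import numpy as np
from scipy import stats

def power_prior_posterior(x, n, x_hist, n_hist, delta, a0=1.0, b0=1.0):
    """Beta posterior for a response rate when the historical binomial
    likelihood is raised to the power delta (0 = no borrowing,
    1 = full pooling), starting from a Beta(a0, b0) initial prior."""
    a = a0 + x + delta * x_hist
    b = b0 + (n - x) + delta * (n_hist - x_hist)
    return stats.beta(a, b)

# Hypothetical current arm (18/50) borrowing from a historical arm (40/100)
post = power_prior_posterior(x=18, n=50, x_hist=40, n_hist=100, delta=0.5)
print(f"posterior mean = {post.mean():.3f}, "
      f"95% CrI = [{post.ppf(0.025):.3f}, {post.ppf(0.975):.3f}]")
```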
- Published
- 2020
- Full Text
- View/download PDF
69. Comparison of disease prevalence in two populations under double-sampling scheme with two fallible classifiers.
- Author
-
Qiu, Shi-Fang, He, Jie, Tao, Ji-Ran, Tang, Man-Lai, and Poon, Wai-Yin
- Subjects
- DISEASE prevalence, NULL hypothesis, FISHER exact test, TEST scoring - Abstract
A disease prevalence can be estimated by classifying subjects according to whether they have the disease. When gold-standard tests are too expensive to be applied to all subjects, partially validated data can be obtained by double-sampling, in which all individuals are classified by a fallible classifier and some individuals are additionally validated by the gold-standard classifier. In practice, however, such an infallible classifier may not be available. In this article, we consider two models in which both classifiers are fallible and propose four asymptotic test procedures for comparing disease prevalence in two groups. Corresponding sample size formulae and validation ratios given the total sample sizes are also derived and evaluated. Simulation results show that (i) the score test performs well, and the corresponding sample size formula is also accurate in terms of empirical power and size under both models; (ii) the Wald test based on the variance estimator with parameters estimated under the null hypothesis outperforms the others, even under small sample sizes in Model II, and the sample size estimated by this test is also accurate; and (iii) the estimated validation ratios based on all tests are accurate. Malaria data are used to illustrate the proposed methodologies. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
70. Minimum risk point estimation (MRPE) of the mean in an exponential distribution under powered absolute error loss (PAEL) due to estimation plus cost of sampling.
- Author
-
Mukhopadhyay, Nitis and Khariton, Yakov
- Subjects
- FIX-point estimation, ASYMPTOTIC efficiencies, LOSS functions (Statistics), COST estimates, DATA analysis, STOCHASTIC dominance - Abstract
We begin with a review of asymptotic properties of a purely sequential minimum risk point estimation (MRPE) methodology for an unknown mean in a one-parameter exponential distribution under a class of generalized loss functions. This class of powered absolute error loss (PAEL) includes both squared error loss (SEL) and absolute error loss (AEL) plus cost of sampling. We prove the asymptotic second-order efficiency property and asymptotic first-order risk efficiency property associated with the purely sequential MRPE problem. For operational convenience, we then move to implement an accelerated sequential MRPE methodology and prove the analogous asymptotic second-order efficiency property and asymptotic first-order risk efficiency property. We follow up with extensive data analysis from simulations and provide illustrations using cancer data. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
71. Characteristics associated with Publication of Randomized Controlled Trials in the Journal of Cardiothoracic and Vascular Anesthesia: A 15-Year Analysis, 2004-2018.
- Author
-
Pagel, Paul S., Lazicki, Timothy J., Izquierdo, David A., Boettcher, Brent T., Tawil, Justin N., and Freed, Julie K.
- Abstract
Randomized controlled trials (RCTs) provide important data to guide clinical decisions. Publication bias may limit the applicability of RCTs because many clinical investigators prefer to submit, and journals more selectively accept, studies with positive results. The authors tested the hypothesis that positive RCTs published in the Journal of Cardiothoracic and Vascular Anesthesia were more likely to be associated with factors known to predict publication of positive versus negative RCTs in other journals. This observational study was an internet analysis of all issues of the Journal of Cardiothoracic and Vascular Anesthesia from 2004-2018. Each issue was searched to identify human RCTs. The numbers of centers and enrolled patients in each RCT were tabulated. The corresponding author determined the country of origin (United States v international). A trial was "positive" or "negative" based on rejection or confirmation of the null hypothesis, respectively, for the primary outcome variable, or for the majority of measured outcomes if a primary outcome was not identified. The presence or absence of a hypothesis, randomization methodology, sample size calculation, and blinded research design was recorded. Registration in a public database, Consolidated Standards of Reporting Trials (CONSORT) guideline compliance, and the source of funding were also determined. The number of citations for each RCT was determined using Google Scholar; the citation rate was calculated as the ratio of the total number of citations to the number of years since the trial's original publication. A total of 296 RCTs were identified, of which 58.8% reported positive results. Most RCTs were single center, relatively small, and international in origin. Total citations/RCT decreased over time, but citations/year did not. The percentage of RCTs that identified a randomization method, were registered, or followed CONSORT guidelines increased in a time-dependent manner. No differences in any factors associated with publication of RCTs were observed when positive and negative trials were compared. The Journal of Cardiothoracic and Vascular Anesthesia publishes more positive than negative RCTs, but factors that have been previously associated with RCT publication in other journals were similar between groups. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
72. An alternate approach for sample size determination in a multi-regional trial.
- Author
-
Ko, Feng-shou
- Abstract
We develop a tolerance interval that accounts for multiple variance components in the measurements in order to address environmental or ethnic factors in a multi-regional trial. The idea is to construct a tolerance interval for a multi-regional trial under a sampling plan. Assuming that an environmental or ethnic effect exists, a sampling plan is derived to ensure a desired coverage probability for the tolerance interval. In this paper, we address the design of a multi-regional trial when environmental or ethnic effects exist among regions. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
73. Optimal designs for regional bridging studies using the Bayesian power prior method.
- Author
-
Nagase, Mario, Ueda, Shinya, Higashimori, Mitsuo, Ichikawa, Katsuomi, Dunyak, James, and Al‐Huniti, Nidal
- Subjects
- TYPE 2 diabetes, GLOBAL studies, RATE of return - Abstract
As described in the ICH E5 guidelines, a bridging study is an additional study executed in a new geographical region or subpopulation to link or "build a bridge" from global clinical trial outcomes to the new region. The regulatory and scientific goals of a bridging study are to evaluate potential subpopulation differences while minimizing duplication of studies and meeting unmet medical needs expeditiously. Use of historical data (borrowing) from global studies is an attractive approach to meeting these conflicting goals. Here, we propose a practical and relevant approach to guide the optimal borrowing rate (percent of subjects in earlier studies) and the number of subjects in the new regional bridging study. We address the limitations in global/regional exchangeability through use of a Bayesian power prior method and then optimize the bridging study design from a return-on-investment viewpoint. The method is demonstrated using clinical data from global and Japanese trials of dapagliflozin for type 2 diabetes. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
74. Power analysis for cluster randomized trials with continuous co-primary endpoints
- Author
-
Yang, Siyun, Moerbeek, Mirjam, Taljaard, Monica, and Li, Fan
- Abstract
Pragmatic trials evaluating health care interventions often adopt cluster randomization due to scientific or logistical considerations. Systematic reviews have shown that co-primary endpoints are not uncommon in pragmatic trials but are seldom recognized in sample size or power calculations. While methods for power analysis based on K binary co-primary endpoints are available for cluster randomized trials (CRTs), to our knowledge, methods for continuous co-primary endpoints are not yet available. Assuming a multivariate linear mixed model (MLMM) that accounts for multiple types of intraclass correlation coefficients among the observations in each cluster, we derive the closed-form joint distribution of the K treatment effect estimators to facilitate sample size and power determination with different types of null hypotheses under equal cluster sizes. We characterize the relationship between the power of each test and the different types of correlation parameters. We further relax the equal cluster size assumption and approximate the joint distribution of the K treatment effect estimators through the mean and coefficient of variation of cluster sizes. Our simulation studies with a finite number of clusters indicate that the power predicted by our method agrees well with the empirical power when the parameters in the MLMM are estimated via the expectation-maximization algorithm. An application to a real CRT is presented to illustrate the proposed method.
- Published
- 2023
75. Vitamin D Levels and Risk of Juvenile Idiopathic Arthritis: A Mendelian Randomization Study
- Author
-
Sarah L N Clarke, Ruth E. Mitchell, Caroline L Relton, Athimalaipet V Ramanan, and Gemma C Sharp
- Subjects
Oncology, medicine.medical_specialty, education.field_of_study, 25-(OH)D, business.industry, Incidence (epidemiology), Confounding, Population, Genome-wide association study, Juvenile idiopathic arthritis, Rheumatology, Mendelian Randomization, Sample size determination, Internal medicine, Mendelian randomization, medicine, Vitamin D and neurology, Observational study, Vitamin D, business, education - Abstract
OBJECTIVES: Observational studies report mixed findings regarding the association between vitamin D and JIA incidence or activity; however, such studies are susceptible to considerable bias. Since low vitamin D levels are common within the general population and easily corrected, there is potential public health benefit in identifying a causal association between vitamin D insufficiency and JIA incidence. To limit bias due to confounding and reverse causation, we examined the causal effect of the major circulating form of vitamin D, 25-(OH)D, on JIA incidence using Mendelian randomization (MR). METHODS: In this two-sample MR analysis we used summary-level data from the largest and most recent genome-wide association study (GWAS) of 25-(OH)D levels (sample size 443,734), alongside summary data from two JIA GWASs (sample sizes 15,872 and 12,501), all from European populations. To test and account for potential bias due to pleiotropy, we employed multiple MR methods and sensitivity analyses. RESULTS: We found no evidence of a causal relationship between genetically predicted 25-(OH)D levels and JIA incidence (OR 1.00, 95% CI 0.76-1.33 per standard deviation increase in standardised natural-log-transformed 25-(OH)D levels). This estimate was consistent across all methods tested. Additionally, there was no evidence that genetically predicted JIA causally influences 25-(OH)D levels (-0.002 standard deviation change in standardised natural-log-transformed 25-(OH)D levels per doubling of odds in genetically predicted JIA, 95% CI -0.006 to 0.002). CONCLUSION: Given the lack of a causal relationship between 25-(OH)D levels and JIA, population-level vitamin D supplementation is unlikely to reduce JIA incidence.
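For background, a minimal sketch of the fixed-effect inverse-variance-weighted (IVW) estimator commonly used as the primary method in two-sample MR (the SNP summary statistics below are hypothetical, not from the cited GWASs):

```python
import numpy as np

def ivw_estimate(beta_exp, beta_out, se_out):
    """Fixed-effect inverse-variance-weighted (IVW) estimate of the causal
    effect from per-variant exposure and outcome association estimates."""
    w = beta_exp**2 / se_out**2              # inverse-variance weights
    est = np.sum(w * beta_out / beta_exp) / np.sum(w)
    se = 1 / np.sqrt(np.sum(w))
    return est, se

# Hypothetical summary statistics for four instruments (SNPs)
bx = np.array([0.10, 0.08, 0.12, 0.09])     # SNP-exposure effects
by = np.array([0.01, -0.02, 0.03, 0.00])    # SNP-outcome effects
sy = np.array([0.02, 0.02, 0.03, 0.02])     # SEs of SNP-outcome effects
est, se = ivw_estimate(bx, by, sy)
print(f"IVW causal estimate = {est:.3f} (SE {se:.3f})")
```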
- Published
- 2022
76. A Systematic Review and Meta-Analysis of the Safety of Hydroxychloroquine in a Randomized Controlled Trial and Observational Studies
- Author
-
Nithya J Gogtay, Miteshkumar Rajaram Maurya, Mahanjit Konwar, Debdipta Bose, and U M Thatte
- Subjects
medicine.medical_specialty, business.industry, Hydroxychloroquine, Placebo, Lower risk, law.invention, Randomized controlled trial, Sample size determination, law, Internal medicine, Meta-analysis, Medicine, Pharmacology (medical), Observational study, General Pharmacology, Toxicology and Pharmaceutics, business, Adverse effect, medicine.drug - Abstract
Introduction: Hydroxychloroquine (HCQ) has recently become the focus of attention in the current COVID-19 pandemic. With an increase in the off-label use of HCQ, concern for the safety of HCQ has been raised. We therefore performed this systematic review to analyze the safety data of HCQ against placebo and active treatment in various disease conditions. Methods: We searched PubMed, Embase, and Cochrane for randomized controlled trials (RCTs) and observational studies (OSs) that evaluated HCQ for the treatment of any disease other than COVID-19 in adult patients up to May 2020. We assessed the quality of the included studies using Risk of Bias 2 (for RCTs) and the Newcastle-Ottawa Scale (for OSs). Data were analyzed with random-effects meta-analysis, and sensitivity and subgroup analyses were performed to identify heterogeneity. Results: A total of 6641 studies were screened, and 49 studies (40 RCTs and 9 OSs) with a total sample size of 35044 patients were included. The use of HCQ was associated with a higher risk of TDAEs as compared to placebo/no active treatment [RR 1.47, 95% CI 1.03-2.08]. When HCQ was compared with active treatments, the risks of AEs [RR 0.74, 95% CI 0.63-0.86] and TDAEs [RR 0.57, 95% CI 0.39-0.81] were lower in the HCQ arm. The outcomes did not differ in the sensitivity analysis. Conclusion: The results suggest that the use of HCQ was associated with a lower risk of AEs and TDAEs as compared to active treatment, while posing a higher risk of TDAEs as compared to placebo.
- Published
- 2022
77. A Novel Discriminative Dictionary Pair Learning Constrained by Ordinal Locality for Mixed Frequency Data Classification
- Author
-
Hong Yu, Guoyin Wang, Qian Yang, and Yongfang Xie
- Subjects
business.industry, Computer science, Locality, Data classification, Pattern recognition, Computer Science Applications, Term (time), Constraint (information theory), Computational Theory and Mathematics, Discriminative model, Sample size determination, Norm (mathematics), Artificial intelligence, business, Information Systems - Abstract
A dilemma faced by classification is that, in some applications, the data are not collected at the same frequency. We investigate mixed frequency data in a new way, recognizing them as a special style of multi-view data in which each view is collected at a different sampling frequency. This paper proposes a discriminative dictionary pair learning method constrained by ordinal locality for mixed frequency data classification (abbreviated DPLOL-MF). This method integrates a synthesis dictionary and an analysis dictionary into a dictionary pair, which not only reduces the computational cost caused by the $\ell_0$- or $\ell_1$-norm constraint, but also handles the sampling frequency inconsistency. DPLOL-MF utilizes the synthesis dictionary to learn class-specific reconstruction information and employs the analysis dictionary to generate coding coefficients by analyzing samples. In particular, the ordinal locality preserving term is leveraged to constrain the atoms of the dictionary pair, further facilitating the learned dictionary pair to be more discriminative. Besides, we design a specific classification scheme for the inconsistent sample sizes of mixed frequency data. This paper illustrates a novel idea for solving the classification task on mixed frequency data, and the experimental results demonstrate the effectiveness of the proposed method.
- Published
- 2022
78. Prehabilitation in hepato-pancreato-biliary surgery: A systematic review and meta-analysis. A necessary step forward evidence-based sample size calculation for future trials
- Author
-
Giuliana Amaddeo, Daniele Sommacale, Alexis Laurent, R. Brustia, Eric Levesque, Arié Attias, Nicolas Mongardon, C. Dagorno, V. Leroy, Rami Rhaiem, and Olivier Langeron
- Subjects
medicine.medical_specialty, Evidence-based practice, business.industry, Prehabilitation, Preoperative Exercise, General Medicine, Length of Stay, Colorectal surgery, law.invention, Surgery, Postoperative Complications, Systematic review, Randomized controlled trial, Sample size determination, law, Sample Size, Meta-analysis, Preoperative Care, Propensity score matching, medicine, Humans, business, Digestive System Surgical Procedures, Randomized Controlled Trials as Topic - Abstract
Summary Introduction Prehabilitation is defined as preoperative conditioning of patients in order to improve postoperative outcomes. Some studies have shown improved functional recovery following colorectal surgery, but the effect of prehabilitation in hepato-pancreato-biliary (HPB) surgery is unclear. The aim of this study was to conduct a systematic literature review and meta-analysis of the currently available evidence on prehabilitation in HPB surgery. Materials and methods A systematic review and a meta-analysis were carried out on prehabilitation (physical, nutritional, and psychological interventions) in HPB surgery (2009-2019). Assessed outcomes were postoperative complications, length of stay (LOS), 30-day readmission, and mortality. Main results Four of the 191 screened studies were included in this systematic review (3 randomized controlled trials, 1 case-control propensity score study), involving 419 patients (prehabilitation group, n = 139; control group, n = 280). After pooling, no difference was observed in LOS (-4.37 days [95% CI: -8.86; 0.13]) or postoperative complications (RR 0.83 [95% CI: 0.62; 1.10]), as reported by all the included studies. Two trials reported readmission rates, but given the high heterogeneity, a meta-analysis was not performed. No deaths were reported among the included studies. Conclusion No effect of prehabilitation programs in HPB surgery was observed on LOS or postoperative complication rates. Future trials with standardized outcome measures and adequately powered sample size calculations are thus required. PROSPERO registration CRD42020165218.
- Published
- 2022
79. GMM quantile regression
- Author
-
Sergio Firpo, Alexandre Poirier, Cristine Campos de Xavier Pinto, Antonio F. Galvao, and Graciela Sanroman
- Subjects
Statistics::Theory, Economics and Econometrics, Applied Mathematics, Monte Carlo method, Estimator, Conditional probability distribution, Quantile regression, Sample size determination, Econometrics, Statistics::Methodology, Generalized method of moments, Parametric statistics, Mathematics, Quantile - Abstract
This paper develops generalized method of moments (GMM) estimation and inference procedures for quantile regression models. We propose a GMM estimator for simultaneous estimation across multiple quantiles. This estimator allows us to model quantile regression coefficients using flexible parametric restrictions across quantiles. The restrictions and simultaneous estimation lead to efficiency gains compared to standard methods. We establish the asymptotic properties of the GMM estimators when the number of quantiles used is fixed and when it diverges to infinity jointly with the sample size. As an alternative to GMM, we also propose a minimum distance estimator over a given subset of quantiles. Moreover, we provide specification tests for the imposed restrictions. The estimators and tests we propose are simple to implement in practice. Monte Carlo simulations provide numerical evidence of the finite sample properties of the methods. Finally, we apply the proposed methods to estimate the effects of smoking on birthweight of live infants at the extreme bottom of the conditional distribution.
- Published
- 2022
80. Sampling properties of the Bayesian posterior mean with an application to WALS estimation
- Author
-
Giuseppe De Luca, Jan R. Magnus, and Franco Peracchi
- Subjects
Economics and Econometrics, WALS, SDG 16 - Peace, Justice and Strong Institutions, Settore SECS-P/05 - Econometria, Monte Carlo method, Bayesian probability, Posterior probability, Double-shrinkage estimators, 01 natural sciences, Least squares, 010104 statistics & probability, Frequentist inference, 0502 economics and business, Statistics, Posterior moments and cumulants, Statistics::Methodology, 0101 mathematics, 050205 econometrics, Mathematics, Location model, Applied Mathematics, 05 social sciences, Univariate, Sampling (statistics), Estimator, Variance (accounting), Sample size determination, Normal location model - Abstract
Many statistical and econometric learning methods rely on Bayesian ideas, often applied or reinterpreted in a frequentist setting. Two leading examples are shrinkage estimators and model averaging estimators, such as weighted-average least squares (WALS). In many instances, the accuracy of these learning methods in repeated samples is assessed using the variance of the posterior distribution of the parameters of interest given the data. This may be permissible when the sample size is large because, under the conditions of the Bernstein-von Mises theorem, the posterior variance agrees asymptotically with the frequentist variance. In finite samples, however, things are less clear. In this paper we explore this issue by first considering the frequentist properties (bias and variance) of the posterior mean in the important case of the normal location model, which consists of a single observation on a univariate Gaussian distribution with unknown mean and known variance. Based on these results, we derive new estimators of the frequentist bias and variance of the WALS estimator in finite samples. We then study the finite-sample performance of the proposed estimators by a Monte Carlo experiment with design derived from a real data application about the effect of abortion on crime rates.
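A minimal Monte Carlo sketch of the normal location model case discussed here: the frequentist bias and variance of the posterior mean follow directly from its shrinkage form (all parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 2.0, 1.0          # true mean and known sampling SD
mu0, tau = 0.0, 1.5              # normal prior N(mu0, tau^2) on theta

lam = tau**2 / (tau**2 + sigma**2)         # shrinkage weight on the data
x = rng.normal(theta, sigma, size=100_000) # repeated samples of size one
post_mean = lam * x + (1 - lam) * mu0      # posterior mean in each sample

# Frequentist bias and variance of the posterior mean across samples
print(f"bias     ~ {post_mean.mean() - theta:+.4f} "
      f"(theory {(1 - lam) * (mu0 - theta):+.4f})")
print(f"variance ~ {post_mean.var():.4f} (theory {lam**2 * sigma**2:.4f})")
```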
- Published
- 2022
81. Rates of convergence in the two-island and isolation-with-migration models
- Author
-
Brandon Legried and Jonathan Terhorst
- Subjects
Models, Genetic ,Concentration of measure ,Inference ,Upper and lower bounds ,Biological Evolution ,Coalescent theory ,Genetics, Population ,Sample size determination ,Statistics ,Pairwise comparison ,Limit (mathematics) ,Ecology, Evolution, Behavior and Systematics ,Mathematics ,Complement (set theory) - Abstract
A number of powerful demographic inference methods have been developed in recent years, with the goal of fitting rich evolutionary models to genetic data obtained from many populations. In this paper we investigate the statistical performance of these methods in the specific case where there is continuous migration between populations. Compared with earlier work, migration significantly complicates the theoretical analysis and requires new techniques. We employ the theories of phase-type distributions and concentration of measure to study the two-island and isolation-with-migration models, resulting in both upper and lower bounds on rates of convergence for parametric estimators in migration models. For the upper bounds, we consider inferring coalescence and migration rates on the basis of directly observing pairwise coalescence times and, more realistically, when (conditionally) Poisson-distributed mutations dropped on latent trees are observed. We complement these upper bounds with information-theoretic lower bounds which establish a limit, in terms of sample size, below which inference is effectively impossible.
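Because the pairwise coalescence time in the symmetric two-island model is phase-type, it is easy to simulate. The sketch below uses an assumed scaling (within-deme coalescence rate 1, per-lineage migration rate m) and checks the simulated means against the classical closed-form values E[T | same deme] = 2 and E[T | different demes] = 2 + 1/(2m); it illustrates the model, not the paper's estimators or bounds.

```python
# Gillespie-style simulation of pairwise coalescence times in the symmetric
# two-island model (phase-type with states "same deme" / "different demes").
import numpy as np

def sim_pairwise_time(m, rng, start_same=True):
    t, same = 0.0, start_same
    while True:
        if same:
            total = 1.0 + 2.0 * m                   # coalescence (1) vs. one of two lineages migrating (2m)
            t += rng.exponential(1.0 / total)
            if rng.random() < 1.0 / total:
                return t                            # lineages coalesced
            same = False
        else:
            t += rng.exponential(1.0 / (2.0 * m))   # a migration reunites the lineages
            same = True

rng = np.random.default_rng(2)
m = 0.25
same = [sim_pairwise_time(m, rng, True) for _ in range(50_000)]
diff = [sim_pairwise_time(m, rng, False) for _ in range(50_000)]
print(f"E[T | same deme]  MC {np.mean(same):.3f}  theory 2.000")
print(f"E[T | diff demes] MC {np.mean(diff):.3f}  theory {2 + 1 / (2 * m):.3f}")
```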
- Published
- 2022
82. Variability of the Surface Area of the V1, V2, and V3 Maps in a Large Sample of Human Observers
- Author
-
Noah C. Benson, Jonathan Winawer, Jennifer M. D. Yoon, Dylan Forenzo, Stephen A. Engel, and Kendrick Kay
- Subjects
Male ,Population ,Biology ,Young Adult ,Similarity (network science) ,Cortex (anatomy) ,Cortical magnification ,Primary Visual Cortex ,medicine ,Humans ,Visual Pathways ,education ,Research Articles ,Visual Cortex ,education.field_of_study ,Brain Mapping ,Human Connectome Project ,General Neuroscience ,Heritability ,Magnetic Resonance Imaging ,Large sample ,medicine.anatomical_structure ,Sample size determination ,Female ,Visual Fields ,Cartography - Abstract
How variable is the functionally-defined structure of early visual areas in human cortex, and how much variability is shared between twins? Here we quantify individual differences in the best understood functionally-defined regions of cortex: V1, V2, and V3. The Human Connectome Project 7T Retinotopy Dataset includes retinotopic measurements from 181 subjects, including many twins. We trained four “anatomists” to manually define V1-V3 using retinotopic features. These definitions were more accurate than automated anatomical templates and showed that surface areas for these maps varied more than three-fold across individuals. This three-fold variation was little changed when normalizing visual area size by the surface area of the entire cerebral cortex. In addition to varying in size, visual areas vary in how they sample the visual field. Specifically, the cortical magnification function differed substantially among individuals, with the relative amount of cortex devoted to central vision varying by more than a factor of 2. To complement the variability analysis, we examined the similarity of visual area size and structure across twins. Whereas the twin sample sizes are too small for precise heritability estimates (50 monozygotic pairs, 34 dizygotic pairs), they nonetheless reveal high correlations, consistent with strong effects of the combination of shared genes and environment on visual area size. Collectively, these results provide the most comprehensive account of individual variability in visual area structure to date, and provide a robust population benchmark against which new individuals and developmental and clinical populations can be compared. Significance Statement: Areas V1, V2, and V3 are among the best studied functionally-defined regions in human cortex. Using the largest retinotopy dataset to date, we characterized the variability of these regions across individuals and the similarity between twin pairs. We find that the size of visual areas varies dramatically (up to 3.5x) across healthy young adults, far more than the variability of cerebral cortex size as a whole. Much of this variability appears to arise from inherited factors, as we find very high correlations in visual area size between monozygotic twin pairs, and lower but still substantial correlations between dizygotic twin pairs. These results provide the most comprehensive assessment to date of how functionally-defined visual cortex varies across the population.
- Published
- 2022
83. Novel Volumetric and Morphological Parameters Derived from Three-dimensional Virtual Modeling to Improve Comprehension of Tumor’s Anatomy in Patients with Renal Cancer
- Author
-
Lorenzo Bianchi, Giulia Carpani, Francesco Chessa, Alessandro Bertaccini, E. Balestrazzi, Emanuela Marcelli, A. Mottaran, Eugenio Brunocilla, Rita Golfieri, Laura Cercenelli, Alberta Cappelli, Pietro Piazza, Francesco V. Costa, Barbara Bortolani, Arianna Rustici, Riccardo Schiavina, Caterian Gaudiano, Sara Boschi, Matteo Droghetti, and E. Molinaroli
- Subjects
medicine.medical_specialty ,Urology ,medicine.medical_treatment ,Urinary system ,Kidney Volume ,Logistic regression ,Nephrectomy ,Robotic Surgical Procedures ,medicine.artery ,Linear regression ,medicine ,Humans ,Warm Ischemia ,Renal artery ,Univariate analysis ,business.industry ,Robot-assisted partial nephrectomy ,Three-dimensional parameters ,Kidney Neoplasms ,Renal cancer ,Sample size determination ,Radiology ,Three-dimensional modeling ,Comprehension ,business ,Complication - Abstract
Background: Three-dimensional (3D) models improve the comprehension of renal anatomy. Objective: To evaluate the impact of novel 3D-derived parameters to predict surgical outcomes after robot-assisted partial nephrectomy (RAPN). Design, setting, and participants: Sixty-nine patients with a cT1-T2 renal mass scheduled for RAPN were included. Three-dimensional virtual models were reconstructed from computed tomography. The following volumetric and morphological 3D parameters were calculated: VT (volume of the tumor); VT/VK (ratio between tumor volume and kidney volume); CSA3D (i.e., contact surface area); UCS3D (contact with the urinary collecting system); Tumor-Artery3D, the tumor's blood supply by tertiary segmental arteries (score = 1), secondary segmental arteries (score = 2), or primary segmental/main renal artery (score = 3); ST (tumor's sphericity); ConvT (tumor's convexity); and Endophyticity3D (ratio between CSA3D and the global tumor surface). Intervention: RAPN with a 3D model. Outcome measurements and statistical analysis: Three-dimensional parameters were compared between patients with and without complications. Univariate logistic regression was used to predict overall complications and type of clamping; linear regression was used to predict operative time, warm ischemia time, and estimated blood loss. Results and limitations: Overall, 11 (15%) individuals experienced complications (7.2% had Clavien ≥3 complications). Patients with urinary collecting system (UCS) involvement in the 3D model (UCS3D = 2), tumors with blood supply from primary or secondary segmental arteries (Tumor-Artery3D = 1 and 2), and high Endophyticity3D values had significantly higher rates of overall complications (all p ≤ 0.03). On univariate analysis, UCS3D, Tumor-Artery3D, and Endophyticity3D were significantly associated with overall complications; CSA3D and Endophyticity3D were associated with warm ischemia time; and CSA3D was associated with selective clamping (all p ≤ 0.03). The sample size and the lack of interobserver variability assessment are the main limitations. Conclusions: Three-dimensional modeling provides novel volumetric and morphological parameters to predict surgical outcomes after RAPN. Patient summary: Novel morphological and volumetric parameters can be derived from a three-dimensional model to describe the surgical complexity of a renal mass and to predict surgical outcomes after robot-assisted partial nephrectomy.
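As an illustration of the univariate logistic regressions used in the analysis, the sketch below fits complication status on a single 3D parameter. All values are synthetic: the variable name mirrors the paper's Endophyticity3D, but the data and the assumed coefficient are invented.

```python
# Univariate logistic regression of complications on one 3D parameter.
# Synthetic data; the "true" relationship below is an invented assumption.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 69
endophyticity = rng.uniform(0.2, 0.9, n)
true_logit = -4.0 + 5.0 * endophyticity
complication = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

fit = sm.Logit(complication, sm.add_constant(endophyticity)).fit(disp=0)
print(f"slope = {fit.params[1]:.2f}, odds ratio = {np.exp(fit.params[1]):.2f}, p = {fit.pvalues[1]:.3f}")
```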
- Published
- 2022
84. Technologies for Fever Screening in the Time of COVID-19: A Review
- Author
-
Scott Adams, Tracey Bucknall, Abbas Z. Kouzani, and Andrew Valentine
- Subjects
2019-20 coronavirus outbreak ,education.field_of_study ,medicine.medical_specialty ,Coronavirus disease 2019 (COVID-19) ,business.industry ,Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) ,Population ,Medical practice ,Sample size determination ,Inclusion and exclusion criteria ,Research studies ,medicine ,Medical physics ,Electrical and Electronic Engineering ,education ,business ,Instrumentation - Abstract
During the COVID-19 pandemic, there has been an increasing rollout of non-contact fever screening solutions to assist in curbing the spread of disease. This study begins by describing how screening for disease has historically been performed. It proposes four measurement characteristics of an ideal screening solution: non-contact, effective, rapid, and low-cost measurement. Next, it reviews the existing literature on fever screening using non-contact infrared thermometer (NCIT) devices as well as infrared thermography (IRT) devices, as these are two technologies that have experienced increasing use. For this review, 185 research papers were identified, and 21 studies were included after inclusion and exclusion criteria were applied. A total of 35 experiments were identified for analysis and their results tabulated. Of these studies, 66% used IRT and 34% used NCIT, with a median sample size of 430 subjects. Twenty-six experiments involved febrile participants, with a median febrile percentage of 11.22% of the population. The reported sensitivity of febrile detection varies from 3.7% to 97% using NCIT and from 15% to 100% using IRT. Both indoor and outdoor studies are investigated, as well as those conducted in acute and non-acute settings. The results of this review show a clear lack of consensus on the effectiveness of these systems. Overall, these results indicate that sensitivity and specificity are reduced when using IRT and NCIT technologies compared with other thermometers used in medical practice. Their use should therefore be carefully assessed based on the risks present in each particular measurement scenario.
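For reference, the sensitivity and specificity figures in these studies come from a 2x2 screening table, as in the helper below. The counts are invented, chosen only to echo the review's median sample size of 430 subjects and febrile fraction of about 11%.

```python
# Sensitivity and specificity from a 2x2 fever-screening table.
# Counts are illustrative, not from any study in the review.
def screening_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)   # febrile subjects correctly flagged
    specificity = tn / (tn + fp)   # afebrile subjects correctly passed
    return sensitivity, specificity

sens, spec = screening_metrics(tp=40, fn=8, fp=30, tn=352)   # 430 subjects, ~11% febrile
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```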
- Published
- 2022
85. Omitted Variable Bias of Lasso-Based Inference Methods: A Finite Sample Analysis
- Author
-
Kaspar Wüthrich and Ying Zhu
- Subjects
Economics and Econometrics ,Economics ,Econometrics (econ.EM) ,Inference ,Mathematics - Statistics Theory ,Omitted-variable bias ,Sample (statistics) ,Statistics Theory (math.ST) ,Statistics::Computation ,FOS: Economics and business ,Statistics::Machine Learning ,Lasso (statistics) ,Sample size determination ,Applied Economics ,Statistics ,FOS: Mathematics ,Statistics::Methodology ,Econometrics ,Social Sciences (miscellaneous) ,Economics - Econometrics ,Mathematics - Abstract
We study the finite sample behavior of Lasso-based inference methods such as post double Lasso and debiased Lasso. We show that these methods can exhibit substantial omitted variable biases (OVBs) due to Lasso not selecting relevant controls. This phenomenon can occur even when the coefficients are sparse and the sample size is large, even larger than the number of controls. Therefore, relying on the existing asymptotic inference theory can be problematic in empirical applications. We compare the Lasso-based inference methods to modern high-dimensional OLS-based methods and provide practical guidance.
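A toy simulation reproduces the mechanism: when many controls carry coefficients of order 1/sqrt(n), both Lasso steps of post double Lasso can miss them, and the resulting treatment coefficient is biased even though full OLS is feasible and essentially unbiased. The design and the penalty level below are a caricature of my own, not the paper's construction.

```python
# Toy reproduction of the OVB mechanism: controls with coefficients of order
# 1/sqrt(n) are missed by both Lasso steps, biasing the treatment effect.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(4)
n, p, theta = 500, 50, 1.0
b = 1.0 / np.sqrt(n)                                 # many moderately small coefficients

X = rng.standard_normal((n, p))
d = X.sum(axis=1) * b + rng.standard_normal(n)       # treatment depends on all controls
y = theta * d + X.sum(axis=1) * b + rng.standard_normal(n)

alpha = 0.2                                          # roughly sigma * sqrt(2 log(2p) / n)
keep = (Lasso(alpha=alpha).fit(X, y).coef_ != 0) | (Lasso(alpha=alpha).fit(X, d).coef_ != 0)
Z = np.column_stack([d, X[:, keep]]) if keep.any() else d.reshape(-1, 1)

theta_pdl = LinearRegression().fit(Z, y).coef_[0]
theta_ols = LinearRegression().fit(np.column_stack([d, X]), y).coef_[0]
print(f"controls selected by Lasso: {keep.sum()} of {p}")
print(f"post-double-Lasso: {theta_pdl:.3f}   full OLS: {theta_ols:.3f}   truth: {theta}")
```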
- Published
- 2023
86. Cortical Motor Planning and Biomechanical Stability During Unplanned Jump Landings in Men With Anterior Cruciate Ligament Reconstruction
- Author
-
Daniel Niederer, Solveig Vieluf, Jan Wilke, Florian Giesche, Tobias Engeroff, and Winfried Banzer
- Subjects
Adult ,Male ,medicine.medical_specialty ,Knee Joint ,Anterior cruciate ligament ,Movement ,Physical Therapy, Sports Therapy and Rehabilitation ,Context (language use) ,Electroencephalography ,Young Adult ,Physical medicine and rehabilitation ,medicine ,Humans ,Orthopedics and Sports Medicine ,Knee ,Ground reaction force ,medicine.diagnostic_test ,Anterior Cruciate Ligament Reconstruction ,business.industry ,Anterior Cruciate Ligament Injuries ,General Medicine ,Anticipation ,Biomechanical Phenomena ,medicine.anatomical_structure ,Cross-Sectional Studies ,Sample size determination ,business ,Neurocognitive ,Center of pressure (fluid mechanics) - Abstract
Context: Athletes with anterior cruciate ligament (ACL) reconstruction (ACLR) exhibit increased cortical motor planning during simple sensorimotor tasks compared with healthy control athletes. This may interfere with proper decision making during time-constrained movements, elevating the reinjury risk. Objective: To compare cortical motor planning and biomechanical stability during jump landings between participants with ACLR and healthy individuals. Design: Cross-sectional study. Setting: Laboratory. Patients or Other Participants: Ten men with ACLR (age = 28 ± 4 years, time after surgery = 63 ± 35 months) and 17 healthy men (age = 28 ± 4 years) completed 43 ± 4 preplanned (landing leg shown before takeoff) and 51 ± 5 unplanned (visual cue during flight) countermovement jumps with single-legged landings. Main Outcome Measure(s): Movement-related cortical potentials (MRCPs) and frontal θ frequency power before the jump were analyzed using electroencephalography. Movement-related cortical potentials were subdivided into 3 successive 0.5-second time periods (readiness potential [RP]-1, RP-2, and negative slope [NS]) relative to movement onset, with higher values indicating more motor planning. Theta power was calculated for the last 0.5 second before movement onset, with higher values indicating more focused attention. Biomechanical landing stability was measured via peak vertical ground reaction force, time to stabilization, and center of pressure. Results: Both the ACLR and healthy groups evoked MRCPs at all 3 time periods. During the unplanned task, the ACLR group exhibited slightly higher but not significantly different MRCPs, with medium effect sizes (RP-1: P = .25, d = 0.44; RP-2: P = .20, d = 0.53; NS: P = .28, d = 0.47). The ACLR group also showed slightly higher θ power values that did not differ significantly during the preplanned (P = .18, d = 0.29) or unplanned (P = .42, d = 0.07) condition, with small effect sizes. The groups did not differ in their biomechanical outcomes (P values > .05). No condition × group interactions occurred (P values > .05). Conclusions: Our jump-landing task evoked MRCPs. Although group differences were not statistically significant, the observed effect sizes provide a first indication that men with ACLR may rely on more cortical motor planning during unplanned jump landings. Confirmatory studies with larger sample sizes are warranted.
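For readers unfamiliar with the effect sizes quoted above, Cohen's d for two independent groups divides the mean difference by a pooled standard deviation. The helper below shows the computation on invented MRCP-like values, not the study's data.

```python
# Cohen's d with a pooled standard deviation for two independent groups.
import numpy as np

def cohens_d(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(5)
aclr = rng.normal(1.2, 1.0, 10)      # hypothetical MRCP amplitudes, n = 10
control = rng.normal(0.7, 1.0, 17)   # hypothetical MRCP amplitudes, n = 17
print(f"d = {cohens_d(aclr, control):.2f}")
```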
- Published
- 2023
87. Analysis of the Relationship Between Smoking Behavior and Central Obesity in Healthy Adults in Suradadi, Tegal Regency
- Author
-
Eva Novita Sari, Ratih Sakti Prastiwi, and Agus Susanto
- Subjects
Research design ,Waist ,central obesity ,business.industry ,medicine.disease ,healthy adult ,Obesity ,smoking behavior ,Stratified sampling ,Smoking behavior ,Sample size determination ,Environmental health ,medicine ,Observational study ,Public Health ,business ,Socioeconomic status - Abstract
Central obesity is influenced by many factors, such as age, gender, economic status, and lifestyle habits, including lack of physical activity, low fiber consumption, consumption of simple carbohydrates and fatty foods, and smoking behavior. Smoking behavior is thought to be a significant factor in the development of central obesity in adult males. The purpose of this study was to determine the relationship between smoking behavior and central obesity in healthy adults. This study used an analytic observational design with a cross-sectional approach. The research was conducted in Suradadi Village, Suradadi District, Tegal Regency. Sampling was carried out by stratified sampling from November 2020 to January 2021. The sample size was 90 men aged 25-60 years. Data were collected by measuring the waist circumference of the respondents and recording other personal data such as age, occupation, and smoking status. Data were processed descriptively and by cross-tabulation. The relationship between smoking behavior and obesity was tested using the chi-square statistic in SPSS version 22. Most respondents were smokers (68.9%), and most did not have central obesity (64.4%). The test of the relationship between smoking status and central obesity yielded a p-value of 0.813. There is no significant relationship between smoking status and central obesity.
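The chi-square test of independence reported here is easy to reproduce with scipy. The 2x2 cell counts below are hypothetical, chosen only to be consistent with the reported marginals (90 men, 68.9% smokers, 64.4% without central obesity), so the resulting p-value is illustrative.

```python
# Chi-square test of independence on a 2x2 smoking-by-obesity table.
# Hypothetical counts consistent with the reported marginals.
from scipy.stats import chi2_contingency

table = [[22, 40],   # smokers: central obesity yes / no
         [10, 18]]   # non-smokers: central obesity yes / no
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
```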
- Published
- 2023
88. Applications of Probability of Study Success in Clinical Drug Development
- Author
-
Wang, Ming-Dauh, Chen, Jiahua, Series editor, Chen, Ding-Geng (Din), Series editor, Chen, Zhen, editor, Liu, Aiyi, editor, Qu, Yongming, editor, Tang, Larry, editor, Ting, Naitee, editor, and Tsong, Yi, editor
- Published
- 2015
- Full Text
- View/download PDF
89. Sigma Estimation
- Author
-
Muralidharan, K. and Muralidharan, K.
- Published
- 2015
- Full Text
- View/download PDF
90. Financial education affects financial knowledge and downstream behaviors
- Author
-
Carly Urban, Lukas Menkhoff, Tim Kaiser, and Annamaria Lusardi
- Subjects
Finance ,Economics and Econometrics ,business.industry ,Randomized experiment ,Strategy and Management ,Psychological intervention ,Publication bias ,law.invention ,Randomized controlled trial ,Sample size determination ,law ,Accounting ,Meta-analysis ,Financial literacy ,business ,Downstream (petroleum industry) - Abstract
We study the rapidly growing literature on the causal effects of financial education programs in a meta-analysis of 76 randomized experiments with a total sample size of over 160,000 individuals. Many of these experiments are published in top economics and finance journals. The evidence shows that financial education programs have, on average, positive causal treatment effects on financial knowledge and downstream financial behaviors. Treatment effects are economically meaningful in size, similar to those realized by educational interventions in other domains, and robust to accounting for publication bias in the literature. We also discuss the cost-effectiveness of financial education interventions.
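The pooling step in a meta-analysis of this kind is commonly a DerSimonian-Laird random-effects model, sketched below; the abstract does not state which estimator the authors use, and the per-study effects and variances here are invented.

```python
# DerSimonian-Laird random-effects pooling of per-study effect sizes.
# Inputs below are invented for illustration.
import numpy as np

def dersimonian_laird(effects, variances):
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances                              # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)    # between-study variance
    w_re = 1.0 / (variances + tau2)
    est = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return est, se, tau2

est, se, tau2 = dersimonian_laird([0.15, 0.30, 0.05, 0.22], [0.010, 0.020, 0.015, 0.030])
print(f"pooled effect = {est:.3f} (SE {se:.3f}), tau^2 = {tau2:.4f}")
```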
- Published
- 2022
91. Adaptive Huber regression on Markov-dependent data
- Author
-
Yongyi Guo, Bai Jiang, and Jianqing Fan
- Subjects
FOS: Computer and information sciences ,Statistics and Probability ,Robustification ,Markov chain ,Applied Mathematics ,010102 general mathematics ,Mathematics - Statistics Theory ,Statistics Theory (math.ST) ,01 natural sciences ,Regression ,Methodology (stat.ME) ,010104 statistics & probability ,Huber loss ,Sample size determination ,Modeling and Simulation ,Linear regression ,FOS: Mathematics ,Applied mathematics ,Spectral gap ,0101 mathematics ,Statistics - Methodology ,Curse of dimensionality ,Mathematics - Abstract
High-dimensional linear regression has been intensively studied in the statistics community over the last two decades. For the convenience of theoretical analysis, classical methods usually assume independent observations and sub-Gaussian-tailed errors. However, neither assumption holds in many real high-dimensional time-series data. Recently [Sun, Zhou, Fan, 2019, J. Amer. Stat. Assoc., in press] proposed Adaptive Huber Regression (AHR) to address the issue of heavy-tailed errors. They show that the robustification parameter of the Huber loss should adapt to the sample size, the dimensionality, and the moments of the heavy-tailed errors. We advance this line of work by justifying AHR for dependent observations. Specifically, we consider an important dependence structure: Markov dependence. Our results show that Markov dependence affects both the adaptation of the robustification parameter and the estimation of the regression coefficients: the sample size should be discounted by a factor that depends on the spectral gap of the underlying Markov chain.
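A minimal sketch of the adaptive idea: the Huber truncation level grows with the sample size, roughly tau = sigma_hat * sqrt(n_eff / (d + log n)), and under Markov dependence the effective sample size n_eff is the raw n discounted by a spectral-gap factor. The discount placeholder, the MAD-based scale estimate, and the plain gradient-descent solver below are all simplifying assumptions of this sketch, not the authors' algorithm.

```python
# Adaptive Huber regression sketch: tau = sigma_hat * sqrt(n_eff / (d + log n)),
# with n_eff = discount * n standing in for the spectral-gap discount the
# paper derives under Markov dependence (discount = 1 corresponds to i.i.d. data).
import numpy as np

def huber_fit(X, y, tau, steps=2000, lr=0.5):
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        r = y - X @ beta
        beta += lr * X.T @ np.clip(r, -tau, tau) / len(y)   # Huber score step
    return beta

rng = np.random.default_rng(6)
n, d, discount = 1000, 5, 1.0
X = rng.standard_normal((n, d))
beta_true = np.ones(d)
y = X @ beta_true + rng.standard_t(df=2.1, size=n)          # heavy-tailed errors

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
res = y - X @ beta_ols
sigma_hat = 1.4826 * np.median(np.abs(res - np.median(res)))  # robust (MAD) scale
tau = sigma_hat * np.sqrt(discount * n / (d + np.log(n)))

beta_huber = huber_fit(X, y, tau)
print("||OLS  - truth|| :", round(float(np.linalg.norm(beta_ols - beta_true)), 3))
print("||Huber - truth||:", round(float(np.linalg.norm(beta_huber - beta_true)), 3))
```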
- Published
- 2022
92. The association of changes of sleep architecture related to donepezil: A systematic review and meta-analysis
- Author
-
Chung-Yao Hsu, Ching-Kuan Liu, Cheng-Fang Hsieh, Pao-Yen Lin, Ping-Tao Tseng, Bo-Lin Ho, Tien-Yu Chen, and Yen-Wen Chen
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,business.industry ,Polysomnography ,General Medicine ,Placebo ,Sleep architecture ,Sleep in non-human animals ,Physical medicine and rehabilitation ,Piperidines ,Sample size determination ,Meta-analysis ,Indans ,mental disorders ,medicine ,Humans ,Donepezil ,Sleep ,business ,Association (psychology) ,medicine.drug - Abstract
BACKGROUND: Donepezil has been recognized to affect sleep quality in patients with dementia. However, there is insufficient evidence about its actual effect on sleep architecture. Our meta-analysis aimed to evaluate changes in sleep architecture related to donepezil use. METHODS: Following the PRISMA 2020 and AMSTAR 2 guidelines, an electronic search was performed on the PubMed, Embase, ScienceDirect, ClinicalKey, Cochrane CENTRAL, ProQuest, Web of Science, and ClinicalTrials.gov databases. The outcome measure was change in sleep parameters detected by polysomnography. A random-effects meta-analysis was conducted. RESULTS: Twelve studies were included. The percentage of REM sleep significantly increased after donepezil treatment (Hedges' g = 0.694, p
- Published
- 2022
93. Population Pharmacokinetics of Levetiracetam: A Systematic Review
- Author
-
Janthima Methaneethorn and Nattawut Leelakanok
- Subjects
education.field_of_study ,medicine.medical_specialty ,Levetiracetam ,business.industry ,Body Weight ,Population ,Infant, Newborn ,Postmenstrual Age ,Renal function ,Population pharmacokinetics ,Kinetics ,Pharmacokinetics ,Research Design ,Sample size determination ,Pharmacodynamics ,Internal medicine ,medicine ,Humans ,Anticonvulsants ,Pharmacology (medical) ,General Pharmacology, Toxicology and Pharmaceutics ,education ,business ,medicine.drug - Abstract
Background: The use of levetiracetam (LEV) has been increasing given its favorable pharmacokinetic profile. Numerous population pharmacokinetic studies for LEV have been conducted. However, there are some discrepancies regarding factors affecting its pharmacokinetic variability. Therefore, this systematic review aimed to summarize significant predictors for LEV pharmacokinetics as well as the need for dosage adjustments. Methods: We performed a systematic search for population pharmacokinetic studies of LEV conducted using a nonlinear mixed-effects approach in the PubMed, Scopus, CINAHL Complete, and Science Direct databases from their inception to March 2020. Information on study design, model methodologies, significant covariate-parameter relationships, and model evaluation was extracted. The quality of the reported studies was also assessed. Results: A total of 16 studies were included in this review. Only two studies were conducted with a two-compartment model, while the rest were performed with a one-compartment structure. Body weight and creatinine clearance were the two most frequently identified covariates on LEV clearance (CLLEV). Additionally, postmenstrual age (PMA) or postnatal age (PNA) were significant predictors for CLLEV in neonates. Only three studies externally validated the models. Two studies developed pharmacodynamic models for LEV, both with relatively small sample sizes. Conclusion: Significant predictors for LEV pharmacokinetics are highlighted in this review. For future research, a population pharmacokinetic-pharmacodynamic model should be developed using a larger sample size. From a clinical perspective, the published models should be externally evaluated before clinical implementation.
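To make the covariate relationships concrete, here is an illustrative one-compartment oral model in which clearance scales allometrically with body weight and proportionally with creatinine clearance, the two most frequent covariates identified. All parameter values are invented, not taken from any reviewed model.

```python
# Illustrative one-compartment oral PK model with body weight and CrCl on
# clearance. Parameter values are assumptions, not from the review.
import numpy as np

def concentration(t_h, dose_mg, wt_kg, crcl_ml_min):
    cl = 3.5 * (wt_kg / 70.0) ** 0.75 * (crcl_ml_min / 100.0)  # L/h, assumed typical value
    v = 0.6 * wt_kg                                            # L, assumed
    ka = 2.0                                                   # 1/h, assumed
    ke = cl / v
    return dose_mg * ka / (v * (ka - ke)) * (np.exp(-ke * t_h) - np.exp(-ka * t_h))

t = np.linspace(0.0, 12.0, 5)                                  # hours post-dose
print(concentration(t, dose_mg=500, wt_kg=70, crcl_ml_min=100).round(2))
```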
- Published
- 2022
94. The Relationship of Duration of Hemodialysis with Coping Mechanisms of Chronic Kidney Disease Patients Underwent Hemodialysis
- Author
-
I Gusti Agung Tirta Dewayani, Ketut Lisnawati, and Ika Setya Purwanti
- Subjects
Organizational citizenship behavior ,Survey methodology ,Sample size determination ,Vocational education ,Validity ,Job satisfaction ,General Medicine ,Organizational commitment ,Psychology ,Path analysis (statistics) ,Social psychology - Abstract
Undergoing hemodialysis can cause stress for patients with chronic kidney disease. The longer patients suffer from chronic kidney disease, the more experience they accumulate with the stressors caused by the disease, and these experiences can serve as anticipatory resources for coping with the stressors they face. The purpose of this study was to examine the relationship between the duration of hemodialysis and the coping mechanisms of patients with chronic kidney disease undergoing hemodialysis. This study used a quantitative descriptive method with a correlational design and a cross-sectional approach. The sample was 111 patients with chronic kidney disease undergoing hemodialysis, selected by purposive sampling. Data were collected with the Jalowiec Coping Scale questionnaire and a record of the duration of hemodialysis. The results showed that most respondents underwent hemodialysis in the category
- Published
- 2022
95. On sampled metrics for item recommendation
- Author
-
Walid Krichene and Steffen Rendle
- Subjects
Mean squared error ,General Computer Science ,Computer science ,Sampling (statistics) ,Context (language use) ,Sample (statistics) ,02 engineering and technology ,Set (abstract data type) ,Ranking ,Sample size determination ,020204 information systems ,Statistics ,Metric (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing - Abstract
Recommender systems personalize content by recommending items to users. Item recommendation algorithms are evaluated by metrics that compare the positions of truly relevant items among the recommended items. To speed up the computation of metrics, recent work often uses sampled metrics where only a smaller set of random items and the relevant items are ranked. This paper investigates such sampled metrics in more detail and shows that they are inconsistent with their exact counterparts, in the sense that they do not preserve relative statements (for example, "recommender A is better than B"), not even in expectation. Moreover, the smaller the sample size, the less difference there is between metrics, and for very small sample sizes, all metrics collapse to the AUC metric. We show that it is possible to improve the quality of the sampled metrics by applying a correction, obtained by minimizing different criteria. We conclude with an empirical evaluation of the naive sampled metrics and their corrected variants. To summarize, our work suggests that sampling should be avoided for metric calculation; however, if an experimental study needs to sample, the proposed corrections can improve the quality of the estimate.
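The inconsistency is easy to reproduce in a toy experiment: below, recommender B beats A under the exact Hit@10 metric, yet A appears better when the relevant item is ranked against 99 sampled negatives. The two rank distributions are contrived for the demonstration; only the sampling mechanics mirror common practice.

```python
# Sampled Hit@10 can reverse the exact ordering of two recommenders.
# A always ranks the relevant item 40th of 10,000; B ranks it 1st half
# the time and 5,000th otherwise. Exactly, B wins; sampled, A wins.
import numpy as np

rng = np.random.default_rng(7)
N, m, k, trials = 10_000, 99, 10, 20_000

def sampled_ranks(exact_ranks):
    # each sampled negative outranks the item with prob (rank - 1) / (N - 1)
    return 1 + rng.binomial(m, (exact_ranks - 1) / (N - 1))

ranks_a = np.full(trials, 40)
ranks_b = np.where(rng.random(trials) < 0.5, 1, 5_000)

for name, ranks in (("A", ranks_a), ("B", ranks_b)):
    print(f"{name}: exact Hit@{k} = {np.mean(ranks <= k):.2f}, "
          f"sampled Hit@{k} = {np.mean(sampled_ranks(ranks) <= k):.2f}")
```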
- Published
- 2022
96. Voting: A machine learning approach
- Author
-
László Szepesváry, Clemens Puppe, Dávid Burka, and Attila Tasnádi
- Subjects
Information Systems and Management ,General Computer Science ,Artificial neural network ,Computer science ,business.industry ,Learnability ,media_common.quotation_subject ,Rank (computer programming) ,Sample (statistics) ,Management Science and Operations Research ,Condorcet method ,Machine learning ,computer.software_genre ,Industrial and Manufacturing Engineering ,Sample size determination ,Modeling and Simulation ,Voting ,Artificial intelligence ,business ,computer ,Axiom ,media_common - Abstract
Voting rules can be assessed from quite different perspectives: the axiomatic, the pragmatic, in terms of computational or conceptual simplicity, susceptibility to manipulation, and many other aspects. In this paper, we take the machine learning perspective and ask how prominent voting rules compare in terms of their learnability by a neural network. To address this question, we train neural networks to choose Condorcet, Borda, and plurality winners, respectively. Remarkably, our statistical results show that, when trained on a limited (but still reasonably large) sample, the neural network most closely mimics the Borda rule, no matter which rule it was trained on. The main overall conclusion is that the necessary training sample size for a neural network varies significantly with the voting rule, and we rank a number of popular voting rules in terms of the sample size required.
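For reference, the three target rules the networks are trained on can each be stated in a few lines. The profile encoding below (a list of rankings, most preferred first) is an assumption of this sketch, not necessarily the paper's input representation; note that the three rules can disagree on the same profile, which is what makes the learning comparison interesting.

```python
# Reference implementations of plurality, Borda, and Condorcet winners.
from collections import Counter

def plurality_winner(profile):
    return Counter(r[0] for r in profile).most_common(1)[0][0]

def borda_winner(profile):
    m = len(profile[0])
    scores = Counter()
    for r in profile:
        for pos, cand in enumerate(r):
            scores[cand] += m - 1 - pos   # m-1 points for first place, 0 for last
    return scores.most_common(1)[0][0]

def condorcet_winner(profile):
    cands = set(profile[0])
    for c in cands:
        if all(sum(r.index(c) < r.index(d) for r in profile) > len(profile) / 2
               for d in cands - {c}):
            return c
    return None  # a Condorcet winner need not exist

profile = [("a", "b", "c")] * 4 + [("b", "c", "a")] * 3 + [("c", "b", "a")] * 2
print(plurality_winner(profile), borda_winner(profile), condorcet_winner(profile))  # a b b
```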
- Published
- 2022
97. Sample size determination for time-to-event endpoints in randomized selection trials with generalized exponential distribution.
- Author
-
Akbar MH, Ali S, Shah I, and Alqifari HN
- Abstract
Randomized selection trials are frequently used to compare experimental treatments that have the potential to be beneficial, but they often do not include a control group. While time-to-event endpoints are commonly applied in clinical investigations, methodologies for determining the required sample size for such endpoints are lacking, except under the exponential distribution. In recent times, there has been a shift in clinical trials toward progression-free survival as a primary endpoint. However, the utilization of this measure has typically been restricted to specific time points for both sample size determination and analysis, which could substantially influence the clinical trial process and diminish the capacity to discern differences between treatment groups. This study calculates sample sizes for randomized selection trials under the assumption that the time-to-event endpoint follows an exponential, Weibull, or generalized exponential distribution.
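A Monte Carlo sketch of the setting: event times are drawn from the generalized exponential distribution F(x) = (1 - exp(-lam*x))^alpha by inverse-CDF sampling, the arm with the longer mean observed time is selected, and the probability of correct selection is traced as the per-arm sample size grows. All design values (alpha and the two rate parameters) are invented for illustration, not the paper's examples.

```python
# Probability of correct selection in a two-arm selection trial with
# generalized exponential event times. Design values are illustrative.
import numpy as np

rng = np.random.default_rng(8)

def rgenexp(alpha, lam, size):
    # inverse CDF of F(x) = (1 - exp(-lam*x))**alpha: x = -log(1 - u**(1/alpha)) / lam
    u = rng.random(size)
    return -np.log(1.0 - u ** (1.0 / alpha)) / lam

def prob_correct_selection(n, alpha, lam_good, lam_bad, reps=5_000):
    good = rgenexp(alpha, lam_good, (reps, n)).mean(axis=1)   # smaller rate -> longer times
    bad = rgenexp(alpha, lam_bad, (reps, n)).mean(axis=1)
    return np.mean(good > bad)

for n in (20, 50, 100, 200):
    pcs = prob_correct_selection(n, alpha=1.5, lam_good=0.8, lam_bad=1.0)
    print(f"n = {n:3d} per arm: P(correct selection) = {pcs:.3f}")
```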
- Published
- 2024
- Full Text
- View/download PDF
98. Sample size requirements for a survival study
- Author
-
David Collett
- Subjects
Sample size determination ,Survival study ,Statistics ,Biology - Published
- 2023
99. Sample size in clinical protocols
- Author
-
Alvar Loria
- Subjects
Sample size determination ,Statistics ,General Medicine ,Mathematics - Published
- 2023
100. Sample Size Determination in Epidemiological Studies
- Author
-
Elashoff, Janet D, Lemeshow, Stanley, Ahrens, Wolfgang, editor, and Pigeot, Iris, editor
- Published
- 2014
- Full Text
- View/download PDF