3,644 results for "Biometry"
Search Results
2. Characterizing nursing time with patients using computer vision.
- Author
-
Sun, Carolyn, Fu, Caroline, and Cato, Kenrick
- Subjects
- *
NURSING audit , *NURSE-patient relationships , *FACILITATED communication , *DATA analysis , *RESEARCH funding , *PILOT projects , *HOSPITAL nursing staff , *HOSPITAL patients , *EVALUATION of medical care , *NURSING , *DESCRIPTIVE statistics , *MANN Whitney U Test , *HOSPITAL rounds , *ROOMS , *COMMUNICATION , *STATISTICS , *MEDICAL-surgical nurses , *COMPUTER input-output equipment , *TIME , *SHIFT systems - Abstract
Background: Compared to other providers, nurses spend more time with patients, but the exact quantity and nature of those interactions remain largely unknown. The purpose of this study was to characterize the interactions of nurses at the bedside using continuous surveillance over a year-long period. Methods: Nurses' time and activity at the bedside were characterized using a device that integrates obfuscated computer vision with a Bluetooth beacon on the nurses' identification badges to track nurses' activities at the bedside. The surveillance device (AUGi) was installed over 37 patient beds in two medical/surgical units in a major urban hospital. Forty‐nine nurse users were tracked using the beacon. Data were collected 4/15/19–3/15/20. Descriptive statistics were used to characterize nurses' time and activity at the bedside. Results: A total of n = 408,588 interactions were analyzed over 670 shifts, with >1.5 times more interactions during day shifts (n = 247,273) than night shifts (n = 161,315); the mean interaction time was 3.34 s longer during nights than days (p < 0.0001). Each nurse had an average of 7.86 (standard deviation [SD] = 10.13) interactions per bed each shift and a mean total interaction time per bed of 9.39 min (SD = 14.16). On average, nurses covered 7.43 beds (SD = 4.03) per shift (day: mean = 7.80 beds/nurse/shift, SD = 3.87; night: mean = 7.07 beds/nurse/shift, SD = 4.17). The mean time per hourly rounding (HR) was 69.5 s (SD = 98.07), and 50.1 s (SD = 56.58) for bedside shift report. Discussion: As far as we are aware, this is the first study to provide continuous surveillance of nurse activities at the bedside over a year-long period, 24 h/day, 7 days/week. We detected that nurses spend less than 1 min giving report at the bedside, and this is only completed 20.7% of the time. Additionally, hourly rounding was completed only 52.9% of the time, and nurses spent only 9 min total with each patient per shift. 
Further study is needed to detect whether there is an optimal timing or duration of interactions to improve patient outcomes. Clinical Relevance: Nursing time with the patient has been shown to improve patient outcomes but precise information about how much time nurses spend with patients has been heretofore unknown. By understanding minute‐by‐minute activities at the bedside over a full year, we provide a full picture of nursing activity; this can be used in the future to determine how these activities affect patient outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Physiological evidence of escalating stress during COVID-19: a longitudinal assessment of child welfare workers.
- Author
-
Griffiths, Austin, Link, Kim, Haughtigan, Kara, Beer, Oliver W. J., Powell, Lindsey, and Royse, David
- Subjects
- *
PHYSIOLOGICAL stress , *BIOMARKERS , *STATISTICS , *PILOT projects , *WELL-being , *AUTONOMIC nervous system , *SOCIAL workers , *JOB stress , *ONE-way analysis of variance , *FISHER exact test , *LABOR turnover , *CHILD welfare , *RESEARCH funding , *HEART beat , *DESCRIPTIVE statistics , *REPEATED measures design , *BIOMETRY , *DATA analysis software , *DATA analysis , *COVID-19 pandemic , *EMPLOYEE retention - Abstract
Studies have shown that stress has contributed to employee turnover and retention problems for agencies, and at the individual level, chronic stress has been associated with coronary heart disease, anxiety, depression, and many other negative effects. In the past, the extent of stress one has felt has been measured by subjective paper-and-pencil instruments; however, recent technological advances have improved our ability to obtain accurate biofeedback assessments from wearable instruments. The Kentucky Child Welfare Workforce Wellness Initiative is the first known study to explore physiological stress in a sample (n = 32) of child welfare professionals using biometric technology (Firstbeat Bodyguard 2) and the first to report those data longitudinally over a four-month period. The study revealed that a variable associated with the strength of the autonomic nervous system (RMSSD) remained below the norms for a healthy population as participants experienced consistent and prolonged physiological stress. When examined relative to the agency's lifting of COVID restrictions and return to face-to-face service delivery, stress levels rose further, approaching significance (p < .10), and the participants' ability to achieve a state of physiological relaxation significantly decreased. Future research employing biometric technology is also suggested. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
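RMSSD, the heart-rate-variability index named in the abstract above, is the root mean square of successive differences between adjacent inter-beat (RR) intervals. A minimal sketch in Python, using made-up RR values; the study itself used a wearable device (Firstbeat Bodyguard 2), not this hand computation:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative beat-to-beat intervals in milliseconds (hypothetical values).
rr = [812, 790, 804, 778, 795, 810, 788]
print(round(rmssd(rr), 1))  # → 19.8
```

Lower RMSSD values indicate weaker parasympathetic (recovery) activity, which is why sustained values below healthy-population norms are read as prolonged physiological stress.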
4. Ethno-racial identity and digitalisation in self-presentation: a large-scale Instagram content analysis.
- Author
-
Bij de Vaate, Nadia A. J. D., Veldhuis, Jolanda, and Konijn, Elly A.
- Subjects
- *
STATISTICS , *SAMPLE size (Statistics) , *SELF-perception , *SOCIAL media , *ATTITUDE (Psychology) , *BLACK people , *HISPANIC Americans , *ONE-way analysis of variance , *RACE , *GROUP identity , *POPULATION geography , *ETHNOLOGY research , *PEARSON correlation (Statistics) , *PSYCHOSOCIAL factors , *PHOTOGRAPHY , *CHI-squared test , *DESCRIPTIVE statistics , *RESEARCH funding , *ETHNIC groups , *BIOMETRY , *WHITE people , *DATA analysis , *MEDICAL coding - Abstract
This study addresses the extent to which individual online self-presentations become more similar globally, due to globalisation and digitalisation, or whether ethno-racial identity predisposes individuals' online self-presentation. That is, we examined the degree to which individuals varying in ethno-racial identity converge or diverge in online self-presentation. A large-scale content analysis was conducted by collecting selfies on Instagram (i.e. #selfietime; N = 3881). Using facial recognition software, selfies were allotted to a specific ethno-racial identity based on race/ethnicity-related appearance features (e.g. Asian, Black, Hispanic, and White identity) as a proxy for externally imposed ethno-racial identity. Results provided some evidence for convergence in online self-construction among selfie-takers, but generally revealed that self-presentations diverge as a function of ethno-racial identity. That is, results showed more convergence between ethno-racial identities for portraying selfies with objectified elements, whereas divergence in online self-presentations occurred for portraying contextualised selves and filter usage. In all, this study examined the complexity of online self-presentation. Here, we extend earlier cross-cultural research by exploring the convergence-divergence paradigm for the role of externally imposed ethno-racial identity in online self-presentation. Findings imply that ethno-racial identity characteristics remain important in manifestations of online self-presentations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
5. The Translation of Research Central Tendencies in Clinical Practice: Power and Limitations of Statistics.
- Author
-
Haddad, Ramzi, Al Maali, Suzanna, and Saadeh, Maria
- Subjects
STATISTICS ,EXPERIMENTAL design ,BIOMETRY ,MEDICAL research ,ORTHODONTICS - Abstract
The clinician's ability to provide evidence-based orthodontics is directly related to their adeptness at dissecting published research, which in turn is heavily dependent on the proper understanding of the tenets of research design and biostatistics. Using relevant clinical examples, we discuss key principles affecting the translation of the results of single research studies into clinical practice, including effect modifiers and confounders, study design, central tendency and outliers. The application of the same scrutiny to higher level meta-epidemiological evidence is also illustrated in addition to a detailed discussion of the applicability and limitations of prediction studies. We conclude by highlighting how the particulars of publishing undermine the ultimate goal of transparency in dissemination of research findings and how they may restrict the ability to critically scrutinize and build upon seminal landmark studies that, just like other research, are limited by the original research design and statistical protocols employed. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. Bayesian Statistics Improves Biological Interpretability of Metabolomics Data from Human Cohorts.
- Author
-
Brydges, Christopher, Che, Xiaoyu, Lipkin, Walter Ian, and Fiehn, Oliver
- Subjects
BIOMETRY ,CHRONIC fatigue syndrome ,METABOLOMICS ,LATENT structure analysis ,NULL hypothesis ,STATISTICAL power analysis ,ETHER lipids - Abstract
Univariate analyses of metabolomics data currently follow a frequentist approach, using p-values to reject a null hypothesis. We here propose the use of Bayesian statistics to quantify evidence supporting different hypotheses and discriminate between the null hypothesis versus the lack of statistical power. We used metabolomics data from three independent human cohorts that studied the plasma signatures of subjects with myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS). The data are publicly available, covering 84–197 subjects in each study with 562–888 identified metabolites of which 777 were common between the two studies and 93 were compounds reported in all three studies. We show how Bayesian statistics incorporates results from one study as "prior information" into the next study, thereby improving the overall assessment of the likelihood of finding specific differences between plasma metabolite levels. Using classic statistics and Benjamini–Hochberg FDR-corrections, Study 1 detected 18 metabolic differences and Study 2 detected no differences. Using Bayesian statistics on the same data, we found a high likelihood that 97 compounds were altered in concentration in Study 2, after using the results of Study 1 as the prior distributions. These findings included lower levels of peroxisome-produced ether-lipids, higher levels of long-chain unsaturated triacylglycerides, and the presence of exposome compounds that are explained by the difference in diet and medication between healthy subjects and ME/CFS patients. Although Study 3 reported only 92 compounds in common with the other two studies, these major differences were confirmed. We also found that prostaglandin F2alpha, a lipid mediator of physiological relevance, was reduced in ME/CFS patients across all three studies. The use of Bayesian statistics led to biological conclusions from metabolomic data that were not found through frequentist approaches. 
We propose that Bayesian statistics is highly useful for studies with similar research designs if similar metabolomic assays are used. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
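The abstract above describes carrying one study's results forward as "prior information" for the next. A minimal conjugate-normal sketch of that chaining idea, with entirely hypothetical effect sizes and variances (the study's actual Bayesian models for metabolite abundances are more elaborate):

```python
def normal_update(prior_mean, prior_var, data_mean, data_var, n):
    """Conjugate normal update with known sampling variance:
    posterior precision is the sum of prior and data precisions."""
    post_prec = 1.0 / prior_var + n / data_var
    post_var = 1.0 / post_prec
    post_mean = post_var * (prior_mean / prior_var + n * data_mean / data_var)
    return post_mean, post_var

# Hypothetical case-control difference in one metabolite's log-abundance.
# Study 1: vague prior, observed mean difference 0.4 over 90 subjects.
m1, v1 = normal_update(0.0, 10.0, 0.4, 1.0, 90)
# Study 2: Study 1's posterior becomes the prior, as in the abstract.
m2, v2 = normal_update(m1, v1, 0.3, 1.0, 120)
print(round(m2, 3), round(v2, 4))
```

Because the posterior precision accumulates across studies, Study 2 can yield decisive evidence for an effect that neither study detects alone under frequentist FDR correction.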
7. Toward standardization of statistical reporting in studies on entheseal changes.
- Author
-
van der Pas, Stéphanie and Schrader, Sarah
- Subjects
- *
STANDARDIZATION , *DEGREES of freedom , *STATISTICS , *BIOMETRY - Abstract
Statistical analysis, while at first glance an objective way to extract insights from data, remains at its core a human endeavor. Elements of subjectivity are introduced by the many decisions that go into the selection of a statistical method. Such subjectivity may harm the evidentiary value of results from statistical analyses. Standardization of statistical methods decreases the degrees of freedom available to researchers and may thus be seen as a way to increase the objectivity of the analysis. Here, we argue that standardization of methods is not only impossible because statistical methods rely on assumptions that need to be considered on a case‐by‐case basis but also undesirable because it may block innovation. We propose that the entheseal changes field is better served by standardization of reporting and discuss how reporting guidelines may be developed based on examples from biostatistics. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
8. Special issue adaptive tools for resilient bones: Biostatistical approaches to past physical activity in osteoarchaeology.
- Author
-
Schrader, Sarah A. and Carballo Pérez, Jared
- Subjects
- *
PHYSICAL activity , *BIOMETRY , *ARCHAEOLOGICAL human remains - Abstract
In this introduction to the special issue, Adaptive Tools for Resilient Bones: Biostatistical Approaches to Past Physical Activity in Osteoarchaeology, we discuss the outcome of the workshop held in Leiden (the Netherlands; November 18–19, 2021). We review statistical approaches to entheseal changes and present a series of new contributions to this field. These research, commentary, and review articles present different statistical approaches to entheseal changes and reflect the current state of research in the field. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. Overdispersion models for correlated multinomial data: Applications to blinding assessment
- Author
-
Landsman, V, Landsman, D, Li, CS, and Bang, H
- Subjects
Mathematical Sciences, Statistics, Biometry, Cluster Analysis, Computer Simulation, Humans, Likelihood Functions, Mental Disorders, Meta-Analysis as Topic, Models, Statistical, Neck Pain, Randomized Controlled Trials as Topic, Research Design, blinding index, Dirichlet-multinomial, GEE, meta-analysis, Public Health and Health Services, Statistics & Probability, Epidemiology - Abstract
Overdispersion models have been extensively studied for correlated normal and binomial data but much less so for correlated multinomial data. In this work, we describe a multinomial overdispersion model that leads to the specification of the first two moments of the outcome and allows the estimation of the global parameters using generalized estimating equations (GEE). We introduce a Global Blinding Index as a target parameter and illustrate the application of the GEE method to its estimation from (1) a clinical trial with clustering by practitioner and (2) a meta-analysis on psychiatric disorders. We examine the impact of a small number of clusters, high variability in cluster sizes, and the magnitude of the intraclass correlation on the performance of the GEE estimators of the Global Blinding Index using the data simulated from different models. We compare these estimators with the inverse-variance weighted estimators and a maximum-likelihood estimator, derived under the Dirichlet-multinomial model. Our results indicate that the performance of the GEE estimators was satisfactory even in situations with a small number of clusters, whereas the inverse-variance weighted estimators performed poorly, especially for larger values of the intraclass correlation coefficient. Our findings and illustrations may be instrumental for practitioners who analyze clustered multinomial data from clinical trials and/or meta-analysis.
- Published
- 2019
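The Dirichlet-multinomial model discussed above captures overdispersion by letting each cluster draw its own category probabilities. A small simulation sketch with arbitrary parameter choices (not the paper's data), showing that the between-cluster variance of a category count exceeds the pure multinomial variance:

```python
import random

def dirichlet(alphas, rng):
    """Sample category probabilities from a Dirichlet via normalized gammas."""
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(draws)
    return [d / s for d in draws]

def multinomial(n, probs, rng):
    """Draw multinomial counts by inverting the CDF for each of n trials."""
    counts = [0] * len(probs)
    for _ in range(n):
        u, acc = rng.random(), 0.0
        for k, p in enumerate(probs):
            acc += p
            if u <= acc:
                counts[k] += 1
                break
        else:  # guard against floating-point rounding in the cumulative sum
            counts[-1] += 1
    return counts

rng = random.Random(42)
alphas = [2.0, 2.0, 2.0]            # small total alpha -> strong overdispersion
n_per_cluster, n_clusters = 50, 200
first_cat = []
for _ in range(n_clusters):
    p = dirichlet(alphas, rng)       # cluster-specific category probabilities
    first_cat.append(multinomial(n_per_cluster, p, rng)[0])

mean = sum(first_cat) / n_clusters
var = sum((x - mean) ** 2 for x in first_cat) / (n_clusters - 1)
# A pure multinomial at p = 1/3 would give variance n*p*(1-p) ≈ 11.1; the
# Dirichlet mixture inflates it by roughly 1 + (n-1)/(sum(alphas)+1) = 8.
print(round(mean, 1), round(var, 1))
```

This inflation of the count variance relative to the multinomial baseline is exactly what the overdispersion parameter in the paper's GEE and Dirichlet-multinomial formulations accounts for.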
10. Similarity statistics of patch features between heterogeneous iris images (异源虹膜块状特征相似性统计).
- Author
-
张翌阳, 唐云祁, 陈子龙, and 苗迪
- Subjects
IRIS recognition ,DATABASES ,PUBLIC safety ,FORENSIC sciences ,STATISTICS ,BIOMETRY - Abstract
- Published
- 2023
- Full Text
- View/download PDF
11. Semantic wikis as flexible database interfaces for biomedical applications.
- Author
-
Falda, Marco, Atzori, Manfredo, and Corbetta, Maurizio
- Subjects
- *
WIKIS , *DATABASES , *MEDICAL geography , *PHYSICIANS , *STATISTICS , *BIOMETRY - Abstract
Several challenges prevent extracting knowledge from biomedical resources, including data heterogeneity and the difficulty of obtaining and collaborating on data and annotations by medical doctors. Therefore, flexibility in their representation and interconnection is required; it is also essential to be able to interact easily with such data. In recent years, semantic tools have been developed: semantic wikis are collections of wiki pages that can be annotated with properties and so combine flexibility and expressiveness, two desirable aspects when modeling databases, especially in the dynamic biomedical domain. However, semantics and collaborative analysis of biomedical data remain an unsolved challenge. The aim of this work is to create a tool for easing the design and setup of semantic databases and to give the possibility to enrich them with biostatistical applications. As a side effect, this will also make them reproducible, fostering their application by other research groups. A command-line tool has been developed for creating all structures required by Semantic MediaWiki. In addition, a way to expose statistical analyses as R Shiny applications in the interface is provided, along with a facility to export Prolog predicates for reasoning with external tools. The developed software allowed the creation of a set of biomedical databases for the Neuroscience Department of the University of Padova in a more automated way. They can be extended with additional qualitative and statistical analyses of data, including for instance regressions, geographical distribution of diseases, and clustering. The software is released as open-source code under the GPL-3 license at https://github.com/mfalda/tsv2swm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. Science education in medical school: the Oldenburg data analysis project as an implementation example (Wissenschaftsausbildung im Medizinstudium: Das Oldenburger Datenanalyseprojekt als Umsetzungsbeispiel) [Lessons learned].
- Author
-
Timmer, Antje, Neuser, Johanna, Uslar, Verena, Kappen, Sanny, Seipp, Alexander, Tiles-Sar, Natalia, de Sordi, Dominik, Beckhaus, Julia, and Otto-Sobotka, Fabian
- Subjects
- *
STATISTICS , *EVALUATION of human services programs , *TEACHING methods , *MEDICAL students , *CURRICULUM , *LABOR supply , *LEARNING strategies , *ABILITY , *TRAINING , *MEDICAL schools , *OUTCOME-based education , *DESCRIPTIVE statistics , *DATA analysis , *BIOMETRY , *WRITTEN communication , *DATA analysis software , *MEDICAL education , *SCIENCE ,RESEARCH evaluation - Abstract
Introduction: According to the Master Plan 2020, science education will play a critical role in future medical curricula. Science modules have already been implemented at many locations. Other medical faculties will follow in the next few years, as legislation is expected to make recommendations of the national competence-based learning objectives curriculum for medicine (NKLM) mandatory. This article aims to present an implementation example from epidemiology and biometry as a contribution to the didactic discussions within the data sciences in medicine. Project description: We report on our experiences with a data analysis project for second-year medical students, which has been compulsory at the Faculty of Medicine and Health Sciences since 2019. The project is intended to train the scientific skills required from the subjects of epidemiology and biometry for student research projects. Emphasis is placed on responsible data handling, transparency, and reproducibility. For example, the writing of a statistical analysis plan is required prior to data access. Improved standardization of materials, optional use of the English language, and digital support will be implemented to help manage the project when student numbers increase. Discussion: The experience from five years is very positive, although a formal evaluation of the learning success is still pending. Current challenges concern staffing, additional time and supervision requirements for those students who do statistical programming with R, and improved integration into the medical curriculum. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. Time-dynamic profiling with application to hospital readmission among patients on dialysis.
- Author
-
Estes, Jason P, Nguyen, Danh V, Chen, Yanjun, Dalrymple, Lorien S, Rhee, Connie M, Kalantar-Zadeh, Kamyar, and Şentürk, Damla
- Subjects
Humans, Retinal Perforations, Patient Readmission, Risk Factors, Biometry, Time Factors, Outcome Assessment, Health Care, End-stage renal disease, Hospital readmission, Multilevel varying coefficient models, Profiling of medical care providers, United States Renal Data System, Outcome Assessment, Statistics, Statistics & Probability, Other Mathematical Sciences - Abstract
Standard profiling analysis aims to evaluate medical providers, such as hospitals, nursing homes, or dialysis facilities, with respect to a patient outcome. The outcome, for instance, may be mortality, medical complications, or 30-day (unplanned) hospital readmission. Profiling analysis involves regression modeling of a patient outcome, adjusting for patient health status at baseline, and comparing each provider's outcome rate (e.g., 30-day readmission rate) to a normative standard (e.g., national "average"). Profiling methods exist mostly for non time-varying patient outcomes. However, for patients on dialysis, a unique population which requires continuous medical care, methodologies to monitor patient outcomes continuously over time are particularly relevant. Thus, we introduce a novel time-dynamic profiling (TDP) approach to assess the time-varying 30-day readmission rate. TDP is used to estimate, for the first time, the risk-standardized time-dynamic 30-day hospital readmission rate, throughout the time period that patients are on dialysis. We develop the framework for TDP by introducing the standardized dynamic readmission ratio as a function of time and a multilevel varying coefficient model with facility-specific time-varying effects. We propose estimation and inference procedures tailored to the problem of TDP and to overcome the challenge of high-dimensional parameters when examining thousands of dialysis facilities.
- Published
- 2018
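The profiling idea above rests on comparing each facility's observed outcome rate to the rate expected under a normative standard. A toy sketch of an observed/expected standardized readmission ratio computed per facility and month, with invented counts; the paper's multilevel varying coefficient model with facility-specific time-varying effects is far more sophisticated:

```python
# Hypothetical facility records: (facility, month, admissions, readmissions).
records = [
    ("A", 1, 40, 9), ("A", 2, 38, 6), ("A", 3, 45, 12),
    ("B", 1, 60, 7), ("B", 2, 55, 8), ("B", 3, 58, 6),
]

def srr_by_window(records, national_rate):
    """Observed/expected readmission ratio per (facility, month) window."""
    out = {}
    for fac, month, n, readm in records:
        expected = n * national_rate   # expected events under the norm
        out[(fac, month)] = readm / expected
    return out

# Normative standard: the pooled ("national") readmission rate.
national_rate = sum(r for *_, r in records) / sum(n for _, _, n, _ in records)
ratios = srr_by_window(records, national_rate)
print(round(ratios[("A", 3)], 2))   # ratio > 1 flags excess readmissions
```

Tracking such a ratio month by month is the crude analogue of the paper's time-dynamic standardized readmission ratio, which additionally risk-adjusts for patient case mix and smooths the facility effects over time.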
14. Functional Data Analysis in Biomechanics : A Concise Review of Core Techniques, Applications and Emerging Areas
- Author
-
Edward Gunning, John Warmenhoven, Andrew J. Harrison, and Norma Bargary
- Subjects
- Statistics, Quantitative research, Biometry, Sports sciences
- Abstract
This book provides a concise discussion of fundamental functional data analysis (FDA) techniques for analysing biomechanical data, along with an up-to-date review of their applications. The core of the book covers smoothing, registration, visualisation, functional principal components analysis and functional regression, framed in the context of the challenges posed by biomechanical data and accompanied by an extensive case study and reproducible examples using R. This book proposes future directions based on recently published methodological advancements in FDA and emerging sources of data in biomechanics. This is a vibrant research area, at the intersection of applied statistics, or more generally, data science, and biomechanics and human movement research. This book serves as both a contextual literature review of FDA applications in biomechanics and as an introduction to FDA techniques for applied researchers. In particular, it provides a valuable resource for biomechanics researchers seeking to broaden or deepen their FDA knowledge.
- Published
- 2024
15. Modeling Binary Correlated Responses : Using SAS, SPSS, R and STATA
- Author
-
Jeffrey R. Wilson, Kent A. Lorenz, and Lori P. Selby
- Subjects
- Statistics, Biometry
- Abstract
This book is an updated edition of Modeling Binary Correlated Responses Using SAS, SPSS and R, and now includes the use of STATA. It uses these statistical tools to analyze correlated binary data, making the methods accessible to practitioners in a single volume. Chapters cover recently developed statistical tools and statistical packages, and showcase both traditional and new methods for application to health-related research. The data analysis presented in each chapter provides step-by-step instructions so these new methods can be readily applied to projects. Short tutorials are in the appendix for readers interested in learning more about the languages. Data and computer programs will be publicly available so that readers can replicate model development, but learning a new statistical language is not necessary with this book. The inclusion of code for R, SAS, SPSS and STATA allows for easy implementation by readers. Researchers and graduate students in statistics, epidemiology, and public health will find this book particularly useful.
- Published
- 2024
16. Flexible Nonparametric Curve Estimation
- Author
-
Hassan Doosti
- Subjects
- Statistics, Biometry
- Abstract
This book delves into the realm of nonparametric estimations, offering insights into essential notions such as probability density, regression, Tsallis Entropy, Residual Tsallis Entropy, and intensity functions. Through a series of carefully crafted chapters, the theoretical foundations of flexible nonparametric estimators are examined, complemented by comprehensive numerical studies. From theorem elucidation to practical applications, the text provides a deep dive into the intricacies of nonparametric curve estimation. Tailored for postgraduate students and researchers seeking to expand their understanding of nonparametric statistics, this book will serve as a valuable resource for anyone who wishes to explore the applications of flexible nonparametric techniques.
- Published
- 2024
17. Bayesian Compendium
- Author
-
Marcel van Oijen
- Subjects
- Statistics, Biometry, Ecology, Environmental monitoring
- Abstract
This book describes how Bayesian methods work. Aiming to demystify the approach, it explains how to parameterize and compare models while accounting for uncertainties in data, model parameters and model structures. Bayesian thinking is not difficult and can be used in virtually every kind of research. How exactly should data be used in modelling? The literature offers a bewildering variety of techniques (Bayesian calibration, data assimilation, Kalman filtering, model-data fusion, …). This book provides a short and easy guide to all these approaches and more. Written from a unifying Bayesian perspective, it reveals how these methods are related to one another. Basic notions from probability theory are introduced and executable R codes for modelling, data analysis and visualization are included to enhance the book's practical use. The codes are also freely available online. This thoroughly revised second edition has separate chapters on risk analysis and decision theory. It also features an expanded text on machine learning with an introduction to natural language processing and calibration of neural networks using various datasets (including the famous iris and MNIST). Literature references have been updated and exercises with solutions have doubled in number.
- Published
- 2024
18. Textbook of Medical Statistics : For Medical Students
- Author
-
Xiuhua Guo and Fuzhong Xue
- Subjects
- Epidemiology, Statistics, Biometry
- Abstract
This book introduces the basic concepts, principles, and methods of medical statistics systematically and practically, especially the statistical design of experiments for specific problems, the adequate use of statistical methods based on actual data, and the reasonable interpretation of statistical results. This textbook combines statistical methods with the common application of SPSS software, which is flexible, convenient, and user-friendly; thus, students can focus on a deep understanding of statistics. The authors emphasize the application and generalization of statistical methods and combine these methods with modern statistical theory, such as sequential contingency tables and multivariate statistical modelling. This book is a useful textbook for graduate and undergraduate students in medical schools, including MBBS (Bachelor of Medicine and Bachelor of Surgery) students.
- Published
- 2024
19. The First Discriminant Theory of Linearly Separable Data : From Exams and Medical Diagnoses with Misclassifications to 169 Microarrays for Cancer Gene Diagnosis
- Author
-
Shuichi Shinmura
- Subjects
- Statistics, Biometry, Diagnosis, Cancer—Genetic aspects, Quantitative research, Mathematical optimization
- Abstract
This book deals with the first discriminant theory of linearly separable data (LSD), Theory3, based on the four ordinary LSD of Theory1 and 169 microarrays (LSD) of Theory2. Furthermore, readers can quickly analyze medical data with misclassified patients, which is the true purpose of diagnosis. The author developed RIP (an optimal linear discriminant function finding the combinatorial optimal solution) as Theory1 decades ago; it finds the minimum number of misclassifications. RIP discriminated 63 (= 2^6 − 1) models of the Swiss banknote data (200×6) and found the minimum LSD: the basic gene set (BGS). In Theory2, RIP discriminated the Shipp microarray (77×7129), which was LSD and had only 32 nonzero coefficients (the first Small Matryoshka; SM1). Because RIP discriminated another 7,097 genes and found SM2, the author developed the Matryoshka feature selection Method 2 (Program 3), which splits a microarray into many SMs. Program 4 can split a microarray into many BGSs. Then, the wide column LSD (Revolution-0), such as microarray (n
- Published
- 2024
20. Change Point Analysis for Time Series
- Author
-
Lajos Horváth and Gregory Rice
- Subjects
- Mathematical statistics, Time-series analysis, Biometry, Statistics
- Abstract
This volume provides a comprehensive survey that covers various modern methods used for detecting and estimating change points in time series and their models. The book primarily focuses on asymptotic theory and practical applications of change point analysis. The methods discussed in the book go beyond the traditional change point methods for univariate and multivariate series. It also explores techniques for handling heteroscedastic series, high-dimensional series, and functional data. While the primary emphasis is on retrospective change point analysis, the book also presents sequential 'on-line' methods for detecting change points in real-time scenarios. Each chapter in the book includes multiple data examples that illustrate the practical application of the developed results. These examples cover diverse fields such as economics, finance, environmental studies, and health data analysis. To reinforce the understanding of the material, each chapter concludes with several exercises. Additionally, the book provides a discussion of background literature, allowing readers to explore further resources for in-depth knowledge on specific topics. Overall, 'Change Point Analysis for Time Series' offers a broad and informative overview of modern methods in change point analysis, making it a valuable resource for researchers, practitioners, and students interested in analyzing and modeling time series data.
- Published
- 2024
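The retrospective at-most-one-change setting surveyed in this entry can be illustrated with a minimal sketch (mine, not taken from the book): the standardized CUSUM statistic for a single mean change in a univariate series, with the change located at the maximizing split point.

```python
import numpy as np

def cusum_changepoint(x):
    """Estimate a single mean-change location via the standardized CUSUM.

    Returns (k_hat, stat): the 0-based last index of the first segment
    and the maximal CUSUM value scaled by the sample standard deviation.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = np.cumsum(x - x.mean())            # partial sums of centered data
    k = np.arange(1, n)                    # candidate split points 1..n-1
    stat = np.abs(s[:-1]) / np.sqrt(k * (n - k) / n)  # variance-weighted CUSUM
    k_hat = int(np.argmax(stat))
    return k_hat, float(stat[k_hat] / x.std(ddof=1))

# Mean shifts from 0 to 2 after observation 100
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
k_hat, stat = cusum_changepoint(x)         # k_hat is expected near index 99
```

Large values of `stat` relative to the asymptotic null distribution (derived in the book for this and far more general settings) indicate a change.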
21. Modeling Correlated Outcomes Using Extensions of Generalized Estimating Equations and Linear Mixed Modeling
- Author
-
George J. Knafl
- Subjects
- Statistics, Biometry
- Abstract
This book formulates methods for modeling continuous and categorical correlated outcomes that extend the commonly used methods: generalized estimating equations (GEE) and linear mixed modeling. Partially modified GEE adds estimating equations for variance/dispersion parameters to the standard GEE estimating equations for the mean parameters. Fully modified GEE provides alternate estimating equations for mean parameters as well as estimating equations for variance/dispersion parameters. The new estimating equations in these two cases are generated by maximizing a 'likelihood' function related to the multivariate normal density function. Partially modified GEE and fully modified GEE use the standard GEE approach to estimate correlation parameters based on the residuals. Extended linear mixed modeling (ELMM) uses the likelihood function to estimate not only mean and variance/dispersion parameters, but also correlation parameters. Formulations are provided for gradient vectors and Hessian matrices, for a multi-step algorithm for solving estimating equations, and for model-based and robust empirical tests for assessing theory-based models. Standard GEE, partially modified GEE, fully modified GEE, and ELMM are demonstrated and compared using a variety of regression analyses of different types of correlated outcomes. Example analyses of correlated outcomes include linear regression for continuous outcomes, Poisson regression for count/rate outcomes, logistic regression for dichotomous outcomes, exponential regression for positive-valued continuous outcomes, multinomial regression for general polytomous outcomes, ordinal regression for ordinal polytomous outcomes, and discrete regression for discrete numeric outcomes. These analyses also address nonlinearity in predictors based on adaptive search through alternative fractional polynomial models controlled by likelihood cross-validation (LCV) scores. 
Larger LCV scores indicate better models but not necessarily distinctly better models. LCV ratio tests are used to identify distinctly better models. A SAS macro has been developed for analyzing correlated outcomes using standard GEE, partially modified GEE, fully modified GEE, and ELMM within alternative regression contexts. This macro and code for conducting the analyses addressed in the book are available online via the book's Springer website. Detailed descriptions of how to use this macro and interpret its output are provided in the book.
- Published
- 2024
22. Practical Biostatistics for Medical and Health Sciences
- Author
-
Seyed Hassan Saneii and Hassan Doosti
- Subjects
- Statistics, Biometry
- Abstract
This book addresses the challenge of presenting biostatistics to medical and health science audiences coherently. Tailored for students and researchers, its 13 chapters progress logically from foundational concepts like measurement scales and statistical calculations to advanced topics such as probability, correlation, regression and health and disease measures. Practical examples enhance relevance, and its gradual approach ensures easy comprehension even for non-statisticians. The book's practical emphasis shines as it culminates in teaching the use of SPSS software for result interpretation, bridging theory and practice effectively. It empowers medical professionals to confidently understand and apply statistical concepts in their work, serving as an indispensable resource in navigating the intricacies of biostatistics in medical and health sciences.
- Published
- 2024
23. Handbook of Scan Statistics
- Author
-
Joseph Glaz and Markos V. Koutras
- Subjects
- Statistics, Bioinformatics, Biometry, Social sciences—Statistical methods
- Abstract
Scan statistics, one of the most active research areas in applied probability and statistics, has seen tremendous growth during the last 25 years. Google Scholar lists about 3,500 hits to references of articles on scan statistics since the year 2020, resulting in over 850 hits to articles per year. This is mainly due to the extensive and diverse areas of science and technology where scan statistics have been employed, including: atmospheric and climate sciences, business, computer science, criminology, ecology, epidemiology, finance, genetics and genomics, geographic sciences, medical and health sciences, nutrition, pharmaceutical sciences, physics, quality control and reliability, social networks and veterinary science. This volume of the Handbook of Scan Statistics, a collection of forty chapters authored by leading experts in the field, outlines the research and the breadth of applications of scan statistics to the numerous areas of science and technology listed above. These chapters present an overview of the theory, methods and computational techniques related to research in the area of scan statistics and outline future developments. It contains extensive references to research articles, books and relevant computer software. Handbook of Scan Statistics is an excellent reference for researchers and graduate students in applied probability and statistics, as well as for scientists in research areas where scan statistics are used. This volume may also be used as a textbook for a graduate level course on scan statistics.
- Published
- 2024
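A minimal illustration of the continuous scan statistic described in this entry (my sketch, not from the handbook): the maximum number of events falling in any window of fixed length, with a Monte Carlo p-value computed under a uniform null.

```python
import numpy as np

def scan_statistic(events, window):
    """Maximum number of events falling in any interval [t, t + window)."""
    t = np.sort(np.asarray(events, dtype=float))
    # For each event time t[i], count events in [t[i], t[i] + window)
    counts = np.searchsorted(t, t + window, side="left") - np.arange(len(t))
    return int(counts.max()) if len(t) else 0

def scan_pvalue(events, window, total_time, n_sim=2000, seed=1):
    """Monte Carlo p-value of the scan statistic under a uniform null."""
    rng = np.random.default_rng(seed)
    obs = scan_statistic(events, window)
    hits = 0
    for _ in range(n_sim):
        sim = rng.uniform(0, total_time, len(events))
        if scan_statistic(sim, window) >= obs:
            hits += 1
    return obs, (hits + 1) / (n_sim + 1)

# 30 background events on [0, 100] plus a tight cluster of 15 in [40, 42)
rng0 = np.random.default_rng(7)
events = np.concatenate([rng0.uniform(0, 100, 30), rng0.uniform(40, 42, 15)])
obs, pval = scan_pvalue(events, window=2.0, total_time=100.0)
```

The planted cluster makes `obs` at least 15, far beyond what the uniform null produces, so the Monte Carlo p-value is essentially the minimum attainable value.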
24. "Batesonian Mendelism" and "Pearsonian biometry": shedding new light on the controversy between William Bateson and Karl Pearson.
- Author
-
Bertoldi, Nicola
- Subjects
- *
MENDEL'S law, *BIOMETRY, *CROSSBREEDING, *STATISTICS - Abstract
This paper contributes to the ongoing reassessment of the controversy between William Bateson and Karl Pearson by characterising what we call "Batesonian Mendelism" and "Pearsonian biometry" as coherent and competing scientific outlooks. Contrary to the thesis that such a controversy stemmed from diverging theoretical commitments on the nature of heredity and evolution, we argue that Pearson's and Bateson's alternative views on those processes ultimately relied on different appraisals of the methodological value of the statistical apparatus developed by Francis Galton. Accordingly, we contend that Bateson's belief in the primacy of cross-breeding experiments over statistical analysis constituted a minimal methodological unifying condition ensuring the internal coherence of Batesonian Mendelism. Moreover, this same belief implied a view of the study of heredity and evolution as an experimental endeavour and a conception of heredity and evolution as fundamentally discontinuous processes. Similarly, we identify a minimal methodological unifying condition for Pearsonian biometry, which we characterise as the view that experimental methods had to be subordinate to statistical analysis, according to methodological standards set by biometrical research. This other methodological commitment entailed conceiving the study of heredity and evolution as subsumable under biometry and primed Pearson to regard discontinuous hereditary and evolutionary processes as exceptions to a statistical norm. Finally, we conclude that Batesonian Mendelism and Pearsonian biometry represented two potential versions of a single genetics-based evolutionary synthesis since the methodological principles and the phenomena that played a central role in the former were also acknowledged by the latter—albeit as fringe cases—and conversely. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
25. Are prescription misuse and illicit drug use etiologically distinct? A genetically-informed analysis of opioids and stimulants.
- Author
-
Dash, Genevieve F., Martin, Nicholas G., Agrawal, Arpana, Lynskey, Michael T., and Slutske, Wendy S.
- Subjects
- *
STATISTICS, *SUBSTANCE abuse, *DRUGS, *GENES, *DRUGS of abuse, *BIOMETRY - Abstract
Background: Drug classes are grouped based on their chemical and pharmacological properties, but prescription and illicit drugs differ in other important ways. Potential differences in genetic and environmental influences on the (mis)use of prescription and illicit drugs that are subsumed under the same class should be examined. Opioid and stimulant classes contain prescription and illicit forms differentially associated with salient risk factors (common route of administration, legality), making them useful comparators for addressing this etiological issue. Methods: A total of 2410 individual Australian twins [mean age = 31.77 (s.d. = 2.48); 67% women] were interviewed about prescription misuse and illicit use of opioids and stimulants. Univariate and bivariate biometric models partitioned variances and covariances into additive genetic, shared environmental, and unique environmental influences across drug types. Results: Variation in the propensity to misuse prescription opioids was attributable to genes (41%) and unique environment (59%). Illicit opioid use was attributable to shared (71%) and unique (29%) environment. Prescription stimulant misuse was attributable to genes (79%) and unique environment (21%). Illicit stimulant use was attributable to genes (48%), shared environment (29%), and unique environment (23%). There was evidence for genetic influence common to both stimulant types, but limited evidence for genetic influence common to both opioid types. Bivariate correlations suggested that prescription opioid use may be more genetically similar to prescription stimulant use than to illicit opioid use. Conclusions: Prescription opioid misuse may share little genetic influence with illicit opioid use. Future research may consider avoiding unitary drug classifications, particularly when examining genetic influences. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
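The additive genetic (A), shared environment (C), and unique environment (E) shares reported in this entry can be sketched with Falconer's back-of-envelope formulas from monozygotic and dizygotic twin correlations. The correlations below are hypothetical, chosen only to reproduce the reported 79%/21% split for prescription stimulant misuse; the study itself fit full biometric models, not these formulas.

```python
def falconer_ace(r_mz, r_dz):
    """Falconer's approximations from MZ/DZ twin correlations:
    A = 2(r_MZ - r_DZ), C = 2 r_DZ - r_MZ, E = 1 - r_MZ."""
    a = 2.0 * (r_mz - r_dz)
    c = 2.0 * r_dz - r_mz
    e = 1.0 - r_mz
    return a, c, e

# Hypothetical twin correlations chosen to give roughly the 79% / 21%
# genes / unique-environment split reported for prescription stimulant misuse
a, c, e = falconer_ace(0.79, 0.395)   # about (0.79, 0.0, 0.21)
```

By construction the three shares sum to one; negative estimates from these formulas in real data signal that the simple ACE decomposition does not fit.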
26. Predicative analytics and speech biometrics.
- Author
-
Sidnyaev, N. I., Butenko, Iu. I., and Kiseleva, A. D.
- Subjects
- *
SPEECH perception, *AUTOMATIC speech recognition, *SPEECH, *BIOMETRY, *STATISTICS, *FOREIGN language education - Abstract
The article presents issues of predictive analytics using different methods and algorithms for forecasting on the basis of statistical data. The relevance of the topic stems from the need for practical systems of automatic recognition and understanding of speech signals when studying foreign languages. Issues in the analysis of speech information recognition by means of control systems are considered. Basic features of the speech recognition device that should be considered when creating an algorithm for speech recognition and delivering recommendations on the improvement of pronunciation are described. An algorithm for word and phrase analysis is offered to convert the acoustic speech signal into a chain of symbols and words. The basic principles of modern speech recognition systems are analyzed. Processing of speech phrases using the proposed mathematical algorithms of signal levels will help to determine the degree of influence of individual features on the device. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
27. Anatomical requirements for dacryocystorhinostomy ostium patency.
- Author
-
Atkova, Eugenia L., Borisenko, Tatiana E., and Yartsev, Vasily D.
- Subjects
- *
COMPUTED tomography, *REOPERATION, *DACRYOCYSTORHINOSTOMY, *STATISTICS, *BIOMETRY - Abstract
Purpose: To identify anatomical factors affecting the outcome of dacryocystorhinostomy (DCR). Methods: The study included the results of dacryocystography in 73 patients after DCR: 37 cases of failed DCR and 36 cases of successful DCR. Biometric characteristics of the formed ostium were evaluated: the horizontal size of the bony “window” and the soft tissue part of the ostium, the vertical size of the bony “window” and soft tissue ostium, the height of the fragment of the remaining bone above and below the line of the common canaliculus, and the height of the “pocket” formed below the lower edge of the ostium. Statistical analysis was performed using parametric and non-parametric methods; differences were considered significant at p ≤ 0.05. Results: Intergroup differences were identified in the maximum horizontal size of the bony “window” (p = 0.015), the maximum horizontal size of the soft tissue “window” (p < 0.001), the maximum vertical size of the soft tissue “window” (p < 0.001), and the height of the fragment of the remaining bone below the level of the common canaliculus to the edge of the formed ostium (p = 0.004). Conclusion: The stage of forming the bony “window” influences the success of DCR. Not only the position of the “window” is important, but also the geometric properties of the formed ostium. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Foundations of Applied Statistical Methods
- Author
-
Hang Lee
- Subjects
- Biometry, Statistics
- Abstract
This book covers methods of applied statistics for researchers who design and conduct experiments, perform statistical inference, and write technical reports. These research activities rely on an adequate knowledge of applied statistics. The reader builds on basic statistics skills and learns to apply them to practical scenarios without over-emphasis on the technical aspects. Demonstrations are a very important part of this text. Mathematical expressions are exhibited only if they are defined or intuitively comprehensible. This text may be used as a guidebook for applied researchers or as an introductory statistical methods textbook for students not majoring in statistics. Discussion includes essential probability models, inference of means, proportions, correlations and regressions, methods for censored survival time data analysis, and sample size determination.
- Published
- 2023
29. Konfidenzintervalle und Standardfehler-Balken : Das Konzept verstehen und Ergebnisse angemessen interpretieren
- Author
-
Irasianty Frost
- Subjects
- Psychology, Psychology—Methodology, Political planning, Law, Statistics, Biometry
- Abstract
This essential demonstrates the correct use of confidence intervals and helps readers avoid, or at least recognize, misinterpretations of them. The mathematical depths of statistics are deliberately left out. In this essential, readers learn the term and the meaning of the confidence interval in the context of the underlying idea of Jerzy Neyman (1894-1981). Examples and figures make the concept of the confidence interval easier to grasp.
- Published
- 2023
30. Optimal Experimental Design : A Concise Introduction for Researchers
- Author
-
Jesús López-Fidalgo
- Subjects
- Experimental design, Statistics, Mathematical statistics—Data processing, Biometry
- Abstract
This textbook provides a concise introduction to optimal experimental design and efficiently prepares the reader for research in the area. It presents the common concepts and techniques for linear and nonlinear models as well as Bayesian optimal designs. The last two chapters are devoted to particular themes of interest, including recent developments and hot topics in optimal experimental design, and real-world applications. Numerous examples and exercises are included, some of them with solutions or hints, as well as references to the existing software for computing designs. The book is primarily intended for graduate students and young researchers in statistics and applied mathematics who are new to the field of optimal experimental design. Given the applications and the way concepts and results are introduced, parts of the text will also appeal to engineers and other applied researchers.
- Published
- 2023
31. Trends and Challenges in Categorical Data Analysis : Statistical Modelling and Interpretation
- Author
-
Maria Kateri and Irini Moustaki
- Subjects
- Statistics, Biometry, Psychometrics, Epidemiology
- Abstract
This book provides a selection of modern and sophisticated methodologies for the analysis of large and complex univariate and multivariate categorical data. It gives an overview of a substantive and broad collection of topics in the analysis of categorical data, including association, marginal and graphical models, time series and fixed effects models, as well as modern methods of estimation such as regularization, Bayesian estimation and bias reduction methods, along with new simple measures for model interpretability. Methodological innovations and developments are illustrated and explained through real-world applications, together with useful R packages, allowing readers to replicate most of the analyses using the provided code. The applications span a variety of disciplines, including education, psychology, health, economics, and social sciences.
- Published
- 2023
32. Statistical Methods and Analyses for Medical Devices
- Author
-
Scott A. Pardo
- Subjects
- Statistics, Biometry, Mathematical statistics—Data processing
- Abstract
This book provides a reference for people working in the design, development, and manufacturing of medical devices. While there are no statistical methods specifically intended for medical devices, there are methods that are commonly applied to various problems in the design, manufacturing, and quality control of medical devices. The aim of this book is not to turn everyone working in the medical device industries into mathematical statisticians; rather, the goal is to provide some help in thinking statistically, and knowing where to go to answer some fundamental questions, such as justifying a method used to qualify/validate equipment, or what information is necessary to support the choice of sample sizes. While there are no statistical methods specifically designed for the analysis of medical device data, some methods appear regularly in relation to medical devices. For example, the assessment of receiver operating characteristic curves is fundamental to the development of diagnostic tests, and accelerated life testing is often critical for assessing the shelf life of medical device products. Another example: sensitivity/specificity computations are necessary for in-vitro diagnostics, and Taguchi methods can be very useful for designing devices. Even notions of equivalence and noninferiority have different interpretations in the medical device field compared to pharmacokinetics. The book covers topics such as dynamic modeling, machine learning methods, equivalence testing, and experimental design. It is for those with no statistical experience as well as the statistically knowledgeable, with the hope of providing some insight into what methods are likely to help provide rationale for choices relating to data gathering and analysis activities for medical devices.
- Published
- 2023
33. A threshold-free summary index of prediction accuracy for censored time to event data.
- Author
-
Chow, Eric, Armstrong, Gregory, Yuan, Yan, Zhou, Qian, Li, Bingying, and Cai, Hengrui
- Subjects
censored event time, positive predictive value, precision-recall curve, risk prediction, screening, time-dependent prediction accuracy, Biometry, Cancer Survivors, Computer Simulation, Heart Failure, Humans, Predictive Value of Tests, Risk Assessment, Risk Factors, Statistics, Nonparametric, Time Factors - Abstract
Prediction performance of a risk scoring system needs to be carefully assessed before its adoption in clinical practice. Clinical preventive care often uses risk scores to screen asymptomatic populations. The primary clinical interest is to predict the risk of having an event by a prespecified future time t0. Accuracy measures such as positive predictive values have been recommended for evaluating the predictive performance. However, for commonly used continuous or ordinal risk score systems, these measures require a subjective cutoff threshold value that dichotomizes the risk scores. The need for a cutoff value created barriers for practitioners and researchers. In this paper, we propose a threshold-free summary index of positive predictive values that accommodates time-dependent event status and competing risks. We develop a nonparametric estimator and provide an inference procedure for comparing this summary measure between 2 risk scores for censored time to event data. We conduct a simulation study to examine the finite-sample performance of the proposed estimation and inference procedures. Lastly, we illustrate the use of this measure on a real data example, comparing 2 risk score systems for predicting heart failure in childhood cancer survivors.
- Published
- 2018
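The flavor of the proposed summary index can be conveyed with an uncensored sketch: the positive predictive value averaged over all cutoffs that capture an event (the average precision). The paper's estimator additionally handles censoring, the time horizon t0, and competing risks, which this toy version ignores.

```python
import numpy as np

def average_ppv(scores, events):
    """Threshold-free PPV summary for binary outcomes: the positive
    predictive value averaged over the cutoffs at which events are
    captured (i.e., the average precision)."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    y = np.asarray(events, dtype=float)[order]
    tp = np.cumsum(y)                       # true positives at each cutoff
    ppv = tp / np.arange(1, len(y) + 1)     # PPV when flagging the top k
    return float((ppv * y).sum() / y.sum())

scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]    # hypothetical risk scores
events = [1, 0, 1, 0, 0, 1]                # event indicators
ap = average_ppv(scores, events)           # (1/1 + 2/3 + 3/6) / 3
```

No dichotomizing threshold is chosen; every cutoff that captures an event contributes one PPV term, so two risk scores can be compared by a single number.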
34. A threshold‐free summary index of prediction accuracy for censored time to event data
- Author
-
Yuan, Yan, Zhou, Qian M, Li, Bingying, Cai, Hengrui, Chow, Eric J, and Armstrong, Gregory T
- Subjects
Genetics, Clinical Research, Genetic Testing, Patient Safety, Prevention, Bioengineering, Biometry, Cancer Survivors, Computer Simulation, Heart Failure, Humans, Predictive Value of Tests, Risk Assessment, Risk Factors, Statistics, Nonparametric, Time Factors, censored event time, positive predictive value, precision-recall curve, risk prediction, screening, time-dependent prediction accuracy, stat.ME, Statistics, Public Health and Health Services, Statistics & Probability - Abstract
Prediction performance of a risk scoring system needs to be carefully assessed before its adoption in clinical practice. Clinical preventive care often uses risk scores to screen asymptomatic populations. The primary clinical interest is to predict the risk of having an event by a prespecified future time t0. Accuracy measures such as positive predictive values have been recommended for evaluating the predictive performance. However, for commonly used continuous or ordinal risk score systems, these measures require a subjective cutoff threshold value that dichotomizes the risk scores. The need for a cutoff value created barriers for practitioners and researchers. In this paper, we propose a threshold-free summary index of positive predictive values that accommodates time-dependent event status and competing risks. We develop a nonparametric estimator and provide an inference procedure for comparing this summary measure between 2 risk scores for censored time to event data. We conduct a simulation study to examine the finite-sample performance of the proposed estimation and inference procedures. Lastly, we illustrate the use of this measure on a real data example, comparing 2 risk score systems for predicting heart failure in childhood cancer survivors.
- Published
- 2018
35. Induced smoothing for rank‐based regression with recurrent gap time data
- Author
-
Lyu, Tianmeng, Luo, Xianghua, Xu, Gongjun, and Huang, Chiung‐Yu
- Subjects
Mathematical Sciences, Statistics, Biometry, Computer Simulation, Denmark, Hospitalization, Humans, Mental Disorders, Patient Readmission, Recurrence, Registries, Regression Analysis, Time Factors, accelerated failure time model, gap times, Gehan-type weight, induced smoothing, recurrent events, Public Health and Health Services, Statistics & Probability, Epidemiology - Abstract
Various semiparametric regression models have recently been proposed for the analysis of gap times between consecutive recurrent events. Among them, the semiparametric accelerated failure time (AFT) model is especially appealing owing to its direct interpretation of covariate effects on the gap times. In general, estimation of the semiparametric AFT model is challenging because the rank-based estimating function is a nonsmooth step function. As a result, solutions to the estimating equations do not necessarily exist. Moreover, the popular resampling-based variance estimation for the AFT model requires solving rank-based estimating equations repeatedly and hence can be computationally cumbersome and unstable. In this paper, we extend the induced smoothing approach to the AFT model for recurrent gap time data. Our proposed smooth estimating function permits the application of standard numerical methods for both the regression coefficients estimation and the standard error estimation. Large-sample properties and an asymptotic variance estimator are provided for the proposed method. Simulation studies show that the proposed method outperforms the existing nonsmooth rank-based estimating function methods in both point estimation and variance estimation. The proposed method is applied to the data analysis of repeated hospitalizations for patients in the Danish Psychiatric Center Register.
- Published
- 2018
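The induced smoothing idea can be sketched for an uncensored one-covariate AFT model: the nonsmooth Gehan indicator I(e_i <= e_j) is replaced by the normal CDF Phi((e_j - e_i)/h), making the estimating function continuous and monotone so a standard root-finder applies. This is an illustration of the general idea only, not the authors' recurrent gap time estimator; the bandwidth h = n^(-1/2) is an assumption.

```python
import math
import numpy as np

def norm_cdf(u):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def gehan_smooth(b, x, logt, h):
    """Induced-smoothed Gehan-type estimating function for a one-covariate,
    uncensored AFT model: I(e_i <= e_j) is replaced by Phi((e_j - e_i)/h)."""
    e = logt - b * x
    n = len(x)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if x[i] != x[j]:                 # equal covariates contribute 0
                total += (x[i] - x[j]) * norm_cdf((e[j] - e[i]) / h)
    return total / (n * n)

# Simulated AFT data: log T = 1.5 * x + error, binary covariate
rng = np.random.default_rng(2)
n = 80
x = rng.integers(0, 2, n).astype(float)
logt = 1.5 * x + rng.normal(0.0, 0.5, n)
h = 1.0 / math.sqrt(n)                       # assumed bandwidth, h -> 0 with n

# The smoothed function is continuous and increasing in b: bisection works
lo, hi = 0.0, 3.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if gehan_smooth(mid, x, logt, h) < 0:
        lo = mid
    else:
        hi = mid
b_hat = 0.5 * (lo + hi)                      # close to the true coefficient 1.5
```

With the original step-function indicator, the same root-finding loop could stall on a flat segment; the smoothed version also yields the closed-form variance expressions that motivate the approach.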
36. Semiparametric regression analysis for alternating recurrent event data
- Author
-
Lee, Chi Hyun, Huang, Chiung‐Yu, Xu, Gongjun, and Luo, Xianghua
- Subjects
Mathematical Sciences, Statistics, Brain Disorders, Mental Health, Health and social care services research, 8.4 Research design and methodologies (health services), Adolescent, Adult, Aged, Aged, 80 and over, Biometry, Computer Simulation, Female, Hospitalization, Humans, Italy, Male, Mental Disorders, Middle Aged, Recurrence, Registries, Regression Analysis, Risk Factors, Time Factors, Young Adult, accelerated failure time model, alternating renewal process, gap times, recurrent events, Public Health and Health Services, Statistics & Probability, Epidemiology - Abstract
Alternating recurrent event data arise frequently in clinical and epidemiologic studies, where 2 types of events such as hospital admission and discharge occur alternately over time. The 2 alternating states defined by these recurrent events could each carry important and distinct information about a patient's underlying health condition and/or the quality of care. In this paper, we propose a semiparametric method for evaluating covariate effects on the 2 alternating states jointly. The proposed methodology accounts for the dependence among the alternating states as well as the heterogeneity across patients via a frailty with unspecified distribution. Moreover, the estimation procedure, which is based on smooth estimating equations, not only properly addresses challenges such as induced dependent censoring and intercept sampling bias commonly confronted in serial event gap time data but also is more computationally tractable than the existing rank-based methods. The proposed methods are evaluated by simulation studies and illustrated by analyzing psychiatric contacts from the South Verona Psychiatric Case Register.
- Published
- 2018
37. Estimating multiple time‐fixed treatment effects using a semi‐Bayes semiparametric marginal structural Cox proportional hazards regression model
- Author
-
Cole, Stephen R, Edwards, Jessie K, Westreich, Daniel, Lesko, Catherine R, Lau, Bryan, Mugavero, Michael J, Mathews, W Christopher, Eron, Joseph J, Greenland, Sander, and the CNICS Investigators
- Subjects
Mathematical Sciences, Statistics, Anti-HIV Agents, Bayes Theorem, Biometry, HIV Infections, Humans, Models, Statistical, Proportional Hazards Models, Regression Analysis, bias, causal inference, cohort study, semi-Bayes, semiparametric, survival analysis, CNICS Investigators, Statistics & Probability - Abstract
Marginal structural models for time-fixed treatments fit using inverse-probability weighted estimating equations are increasingly popular. Nonetheless, the resulting effect estimates are subject to finite-sample bias when data are sparse, as is typical for large-sample procedures. Here we propose a semi-Bayes estimation approach which penalizes or shrinks the estimated model parameters to improve finite-sample performance. This approach uses simple symmetric data-augmentation priors. Limited simulation experiments indicate that the proposed approach reduces finite-sample bias and improves confidence-interval coverage when the true values lie within the central "hill" of the prior distribution. We illustrate the approach with data from a nonexperimental study of HIV treatments.
- Published
- 2018
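The shrinkage idea can be illustrated with the standard information-weighted approximation to semi-Bayes estimation (the paper implements its prior via data augmentation; the closed form below is a common approximation, and all numbers are hypothetical):

```python
import math

def semi_bayes(beta_hat, se_hat, prior_mean, prior_sd):
    """Approximate semi-Bayes shrinkage: inverse-variance weighted average
    of the ML estimate and a normal prior. (The paper's actual method uses
    data-augmentation priors, i.e. prior records appended to the dataset;
    this closed form is the usual large-sample approximation.)"""
    w_data = 1.0 / se_hat**2
    w_prior = 1.0 / prior_sd**2
    beta = (w_data * beta_hat + w_prior * prior_mean) / (w_data + w_prior)
    se = math.sqrt(1.0 / (w_data + w_prior))
    return beta, se

# Hypothetical sparse-data log hazard ratio: HR 3.0 (SE 0.60), shrunk toward
# a null-centered prior with 95% limits HR 1/4 to 4 (prior SD = ln(4)/1.96)
beta, se = semi_bayes(math.log(3.0), 0.60, 0.0, math.log(4.0) / 1.96)
print(round(math.exp(beta), 2), round(se, 2))
```

The estimate is pulled toward the prior mean and its standard error shrinks, which is the finite-sample improvement the abstract describes.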
38. Statistical Package for Growth Rates Made Easy.
- Author
-
Mira, Portia, Barlow, Miriam, Meza, Juan, and Hall, Barry
- Subjects
bootstrap, fitness, fitness assay, growth rates, statistics, Bacteria, Biometry, Colony Count, Microbial, Microbial Viability, Reproducibility of Results, Software - Abstract
Growth rates are an important tool in microbiology because they provide high throughput fitness measurements. The release of GrowthRates, a program that uses the output of plate reader files to automatically calculate growth rates, has facilitated experimental procedures in many areas. However, many sources of variation within replicate growth rate data exist and can decrease data reliability. We have developed a new statistical package, CompareGrowthRates (CGR), to enhance the program GrowthRates and accurately measure variation in growth rate data sets. We define a metric, Variability-score (V-score), that can help determine if variation within a data set might result in false interpretations. CGR also uses the bootstrap method to determine the fraction of bootstrap replicates in which a strain will grow the fastest. We illustrate the usage of CGR with growth rate data sets similar to those in Mira, Meza, et al. (Adaptive landscapes of resistance genes change as antibiotic concentrations change. Mol Biol Evol. 32(10): 2707-2715). These statistical methods are compatible with the analytic methods described in Growth Rates Made Easy and can be used with any set of growth rate output from GrowthRates.
- Published
- 2017
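The bootstrap summary described above — the fraction of replicates in which a strain grows fastest — can be sketched as follows (illustrative data and function names; this is not CGR's actual code):

```python
import random

def fraction_fastest(rates_by_strain, n_boot=2000, rng=random):
    """For each strain, estimate the fraction of bootstrap replicates in
    which its resampled mean growth rate is the highest of all strains."""
    strains = list(rates_by_strain)
    wins = {s: 0 for s in strains}
    for _ in range(n_boot):
        # Resample each strain's replicate growth rates with replacement
        means = {
            s: sum(rng.choices(rates_by_strain[s], k=len(rates_by_strain[s])))
               / len(rates_by_strain[s])
            for s in strains
        }
        wins[max(means, key=means.get)] += 1
    return {s: wins[s] / n_boot for s in strains}

random.seed(0)
data = {  # hypothetical replicate growth rates (1/h)
    "wild_type": [0.62, 0.60, 0.64, 0.61],
    "mutant_A":  [0.66, 0.70, 0.68, 0.67],
}
result = fraction_fastest(data)
print(result)
```

A strain whose fraction is near 1 is reliably the fastest despite replicate-to-replicate variation; fractions near 0.5 across two strains signal that the data cannot distinguish them.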
39. A general statistical framework for subgroup identification and comparative treatment scoring
- Author
-
Chen, Shuai, Tian, Lu, Cai, Tianxi, and Yu, Menggang
- Subjects
Mathematical Sciences, Statistics, Health and social care services research, 8.4 Research design and methodologies (health services), Biometry, Humans, Models, Statistical, Precision Medicine, Treatment Outcome, A-learning, Individualized treatment rules, Observational studies, Propensity score, Regularization, Other Mathematical Sciences, Statistics & Probability - Abstract
Many statistical methods have recently been developed for identifying subgroups of patients who may benefit from different available treatments. Compared with traditional outcome-modeling approaches, these methods focus on modeling interactions between the treatments and covariates while bypassing or minimizing modeling of the main effects of covariates, because subgroup identification depends only on the sign of the interaction. However, these methods are scattered and often narrow in scope. In this article, we propose a general framework, based on weighting and A-learning, for subgroup identification in both randomized clinical trials and observational studies. Our framework involves minimal modeling of the relationship between the outcome and the covariates pertinent to subgroup identification. Under the proposed framework, we may also estimate the magnitude of the interaction, which leads to the construction of a scoring system measuring the individualized treatment effect. The proposed methods are quite flexible and include many recently proposed estimators as special cases. As a result, some estimators originally proposed for randomized clinical trials can be extended to observational studies, and procedures based on the weighting method can be converted to an A-learning method and vice versa. Our approaches also allow straightforward incorporation of regularization methods for high-dimensional data, as well as possible efficiency augmentation and generalization to multiple treatments. We examine the empirical performance of several procedures belonging to the proposed framework through extensive numerical studies.
- Published
- 2017
40. User-centred multimodal authentication: securing handheld mobile devices using gaze and touch input.
- Author
-
Khamis, Mohamed, Marky, Karola, Bulling, Andreas, and Alt, Florian
- Subjects
- *
USER-centered system design , *PRIVACY , *STATISTICS , *DATA security failures , *EYE movements , *ANALYSIS of variance , *POCKET computers , *USER interfaces , *SECURITY systems , *INTERVIEWING , *SMARTPHONES , *SURVEYS , *FRAUD , *COMPUTER graphics , *QUESTIONNAIRES , *ACCESS to information , *DATA security , *MEDICAL ethics , *REPEATED measures design , *DESCRIPTIVE statistics , *RESEARCH funding , *BIOMETRY , *DATA analysis - Abstract
Handheld mobile devices store a plethora of sensitive data, such as private emails, personal messages, photos, and location data. Authentication is essential to protect access to sensitive data. However, the majority of mobile devices are currently secured by single-modal authentication schemes, which are vulnerable to shoulder surfing, smudge attacks, and thermal attacks. While some authentication schemes protect against one of these attacks, only a few address all three. We propose multimodal authentication in which touch and gaze input are combined to resist shoulder surfing as well as smudge and thermal attacks. Based on a series of previously published works in which we studied the usability of several user-centred multimodal authentication designs and their security against multiple threat models, we provide a comprehensive overview of multimodal authentication on handheld mobile devices. We further present guidelines on how to leverage multiple input modalities to enhance the usability and security of user authentication on mobile devices. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
41. Fetal Growth Restriction: Comparison of Biometric Parameters.
- Author
-
Marchand, Carolin, Köppe, Jeanette, Köster, Helen Ann, Oelmeier, Kathrin, Schmitz, Ralf, Steinhard, Johannes, Fruscalzo, Arrigo, and Kubiak, Karol
- Subjects
- *
FETAL growth retardation , *FETUS , *BIOMETRIC identification , *SMALL for gestational age , *QUANTILE regression , *BIOMETRY , *FETAL ultrasonic imaging , *STATISTICS - Abstract
The aim of this study was to identify growth-restricted fetuses using biometric parameters and to assess the validity and clinical value of individual ultrasound parameters and ratios, such as transcerebellar diameter/abdominal circumference (TCD/AC), head circumference/abdominal circumference (HC/AC), and femur length/abdominal circumference (FL/AC). In a retrospective single-center cross-sectional study, the biometric data of 9292 pregnancies between the 15th and 42nd weeks of gestation were acquired. Statistical analysis included descriptive data, quantile regression estimating the 10th and 90th percentiles, and multivariable analysis. We obtained clinically noticeable results in predicting small-for-gestational-age (SGA) and fetal growth restriction (FGR) fetuses at advanced weeks of gestation using the AC with a Youden index of 0.81 and 0.96, respectively. The other individual parameters and quotients were less suited to identifying cases of SGA and FGR. The multivariable analysis demonstrated the best results for identifying SGA and FGR fetuses with an area under the curve of 0.95 and 0.96, respectively. The individual ultrasound parameters were better suited to identifying SGA and FGR than the ratios. Amongst these, the AC was the most promising individual parameter, especially at advanced weeks of gestation. However, the highest accuracy was achieved with a multivariable model. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
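The ratios compared in the study are simple quotients of standard biometric measurements. A minimal sketch follows; the measurements and the percentile threshold are purely illustrative, since real screening uses gestational-age-specific reference charts:

```python
def biometric_ratios(tcd_mm, ac_mm, hc_mm, fl_mm):
    """Compute the three ratios compared in the study:
    TCD/AC, HC/AC and FL/AC (all inputs in millimetres)."""
    return {
        "TCD/AC": tcd_mm / ac_mm,
        "HC/AC": hc_mm / ac_mm,
        "FL/AC": fl_mm / ac_mm,
    }

def flag_sga(ac_mm, ac_10th_percentile_mm):
    """Flag a fetus as screen-positive for small-for-gestational-age when
    the abdominal circumference falls below the 10th percentile for
    gestational age. The percentile value must come from a reference
    chart; the number used below is hypothetical."""
    return ac_mm < ac_10th_percentile_mm

ratios = biometric_ratios(tcd_mm=38.0, ac_mm=280.0, hc_mm=300.0, fl_mm=60.0)
print(ratios, flag_sga(280.0, 292.0))
```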
42. Evaluation of biostatistics knowledge and skills of medical faculty students.
- Author
-
TOMAK, Leman and CİVANBAY, Hasan
- Subjects
- *
BIOMETRY , *TURKS , *STATISTICAL hypothesis testing , *STUDENT attitudes , *TRAINING needs - Abstract
Successful implementation of a scientific study, and correct analysis of the data obtained, requires sound biostatistics knowledge. The aim of this study was to evaluate the efficacy of a basic biostatistics course given to medical faculty students and to assess the students' biostatistics knowledge, attitudes, and behaviours. Medical faculty students at a Turkish university participated in this study. Of the respondents, 123 (52.6%) were male and 111 (47.4%) were female, with a mean age of 20.2 ± 1.7 years. The survey included items on demographic information and on the students' biostatistics knowledge, attitudes, and behaviours, as well as 10 multiple-choice questions covering the subjects taught during the course. The students completed the survey before and after training, and the data obtained were evaluated. Positive responses to having basic biostatistics knowledge rose from 68.0% before training to 95.7% after training. The frequency of knowing the purpose of biostatistics was 81.5% before training and 96.6% after training. Before training, the rates of positive responses were 60.9% for population and sample, 63.2% for basic principles of summarizing data, 54.7% for measures of central tendency and location, and 51.5% for measures of variability; after training they were 95% or higher. Positive responses of 70.8% for hypotheses and error types, 48.7% for statistical assumptions, 36.5% for parametric hypothesis tests, 33.0% for nonparametric hypothesis tests, and 27.4% for statistical package programs before training rose to 93.6% or higher after training. The total score on the multiple-choice questions increased from 2.5 ± 1.4 before training to 7.5 ± 2.1 after training, a statistically significant improvement (p < 0.001). In this study, the biostatistics knowledge, attitudes, and behaviours of medical faculty students were evaluated. Biostatistics training needs to change in line with the latest developments in information technology.
Many medical faculties currently teach basic biostatistics concepts and carry out biostatistics training studies that allow critical evaluation throughout the process. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
43. Kontinuierliche Messgrößen und Stichprobenstrategien in Raum und Zeit : mit Anwendungen in den Natur-, Umwelt-, Wirtschafts- und Finanzwissenschaften
- Author
-
Hartmut Hebbel, Detlef Steuer, Hartmut Hebbel, and Detlef Steuer
- Subjects
- Sampling (Statistics), Statistics, Biometry
- Abstract
This book presents a current selection of mathematical-statistical methods and sampling strategies for handling continuous measurement variables in space and time. It supports, for example, the acquisition, presentation, assessment, and statistical evaluation of measurement signals, as well as the development and design of statistical analysis procedures and suitable sampling plans. The book's contents are particularly suited to applications in the natural and environmental sciences, where continuous measurement variables in space and time occur especially often, yet the associated continuous processes can usually be observed only by sampling at a few locations or time points and are, moreover, often subject to error. Individual chapters (e.g., component models) are also of interest to researchers in economics and finance (chart analysis).
- Published
- 2022
44. Recent Developments in Statistics and Data Science : SPE2021, Évora, Portugal, October 13–16
- Author
-
Regina Bispo, Lígia Henriques-Rodrigues, Russell Alpizar-Jara, Miguel de Carvalho, Regina Bispo, Lígia Henriques-Rodrigues, Russell Alpizar-Jara, and Miguel de Carvalho
- Subjects
- Statistics, Quantitative research, Biometry, Social sciences—Statistical methods
- Abstract
This volume presents a collection of twenty-five peer-reviewed articles carefully selected from the contributions presented at the XXV Congress of the Portuguese Statistical Society (2021). Containing state-of-the-art developments in theoretical and applied statistics, the book will be accessible to readers with a background in mathematics and statistics, but will also be of interest to researchers from other scientific disciplines (e.g., biology, economics, medicine), who will find a broad range of relevant applications.
- Published
- 2022
45. Emerging Topics in Modeling Interval-Censored Survival Data
- Author
-
Jianguo Sun, Ding-Geng Chen, Jianguo Sun, and Ding-Geng Chen
- Subjects
- Biometry, Statistics
- Abstract
This book primarily aims to discuss emerging topics in statistical methods and to bolster research, education, and training to advance statistical modeling of interval-censored survival data. Commonly collected in public health and biomedical research, among other sources, interval-censored survival data can easily be mistaken for typical right-censored survival data, which can result in erroneous statistical inference due to the complexity of this type of data. The book invites a group of internationally leading researchers to systematically discuss and explore the historical development of the associated methods and their computational implementations, as well as emerging topics related to interval-censored data. It covers a variety of topics, including univariate interval-censored data, multivariate interval-censored data, clustered interval-censored data, competing risk interval-censored data, data with interval-censored covariates, interval-censored data from electronic medical records, and misclassified interval-censored data. Researchers, students, and practitioners can directly make use of the state-of-the-art methods covered in the book to tackle their problems in research, education, training and consultation.
- Published
- 2022
46. Health and Vital Statistics
- Author
-
Bernard Benjamin and Bernard Benjamin
- Subjects
- Biometry, Medical statistics, Statistics
- Abstract
Originally published in 1968, this book was intended to help those in health and welfare services as well as those whose policy decisions are influenced by the movement of statistical indices of health, to understand the purpose, derivation and meaning of these indices. It teaches by presenting statistical problems as they are encountered in practice against the background of day-to-day administrative procedures to which they relate. Special attention is paid to practices in the USA and to considerations of international comparability.
- Published
- 2022
47. SIR - Model Supported by a New Density : Action Document for an Adapted COVID - Management
- Author
-
Marcus Hellwig and Marcus Hellwig
- Subjects
- Statistics, Public health, Biometry, Probabilities, Mathematical statistics, Virology
- Abstract
The SIR model supported by a new density, together with its derivatives, is given a statistical data background drawn from frequency distributions; from their parameter values, the new density distribution allows a quality-oriented probability to be inferred for the respective infection process and its future course. COVID management thereby gains a functional model basis for the preventive control of scheduling, cost development, quality management, and personnel and material deployment.
- Published
- 2022
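For orientation, the classical SIR model that the book builds on can be sketched with a simple Euler discretization (illustrative parameters; the book's density-based extension is not reproduced here):

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One Euler step of the classical SIR ODEs:
    dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I
    (S, I, R given as population fractions)."""
    new_inf = beta * s * i * dt   # newly infected in this step
    rec = gamma * i * dt          # newly recovered in this step
    return s - new_inf, i + new_inf - rec, r + rec

def simulate(beta=0.3, gamma=0.1, i0=1e-3, days=200, dt=0.1):
    """Integrate the epidemic and track the peak infected fraction."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        peak = max(peak, i)
    return s, i, r, peak

s, i, r, peak = simulate()
print(round(peak, 3), round(r, 3))
```

With beta = 0.3 and gamma = 0.1 (basic reproduction number R0 = 3), the infected fraction rises to a pronounced peak before the epidemic burns out; the density extension described in the abstract feeds such a model with parameters estimated from observed frequency distributions.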
48. SIR - Modell durch eine neue Dichte unterstützt : Handlungsdokument für ein angepasstes COVID – Management
- Author
-
Marcus Hellwig and Marcus Hellwig
- Subjects
- Statistics, Public health, Biometry, Probabilities, Mathematical statistics, Virology
- Abstract
The SIR model supported by a new density, together with its derivatives, is given a statistical data background drawn from frequency distributions; from their parameter values, the new density distribution allows a quality-oriented probability to be inferred for the respective infection process and its future course. COVID management thereby gains a functional model basis for the preventive control of scheduling, cost development, quality management, and personnel and material deployment.
- Published
- 2022
49. Linear Models and Design
- Author
-
Jay H. Beder and Jay H. Beder
- Subjects
- Statistics, Experimental design, Biometry
- Abstract
This book is designed as a textbook for graduate students and as a resource for researchers seeking a thorough mathematical treatment of its subject. It develops the main results of regression and the analysis of variance, as well as the central results on confounded and fractional factorial experiments. Matrix theory is deemphasized; its role is taken instead by the theory of linear transformations between vector spaces. The text gives a carefully paced and unified presentation of two topics, linear models and experimental design. Students are assumed to have a solid background in linear algebra, basic knowledge of regression and analysis of variance, and some exposure to experimental design, and should be comfortable with reading and constructing mathematical proofs. The book leads students into the mathematical theory, including many examples both for motivation and for illustration. Over 130 exercises of varying difficulty are included. An extensive mathematical appendix and a detailed index make the text especially accessible. Linear Models and Design can serve as a textbook for a year-long course in the topics covered, or for a one-semester course in either linear model theory or experimental design. It prepares students for more advanced topics in the field, and assists in developing a thoughtful approach to the existing literature. It includes a guide to terminology as well as discussion of the history and development of ideas, and offers a fresh perspective on the fundamental concepts and results of the subject.
- Published
- 2022
50. Multiple Comparisons for Bernoulli Data
- Author
-
Taka-aki Shiraishi and Taka-aki Shiraishi
- Subjects
- Statistics, Biometry
- Abstract
This book focuses on multiple comparisons of proportions in multi-sample models with Bernoulli responses. First, the author explains the one-sample and two-sample methods that form the basis of multiple comparisons. Then, regularity conditions are stated in detail. Simultaneous inference for all proportions, based on exact confidence limits and on asymptotic theory, is discussed. Closed testing procedures based on some one-sample statistics are introduced. For all-pairwise multiple comparisons of proportions, the author uses the arcsine square-root transformation of sample means. Closed testing procedures based on maximum absolute values of some two-sample test statistics and based on chi-square test statistics are introduced. It is shown that the multi-step procedures are more powerful than single-step procedures and the Ryan–Einot–Gabriel–Welsch (REGW)-type tests. Furthermore, the author discusses multiple comparisons with a control. Under simple ordered restrictions of proportions, the author also discusses closed testing procedures based on maximum values of two-sample test statistics and based on Bartholomew's statistics. Last, serial gatekeeping procedures based on the above-mentioned closed testing procedures are proposed, whereas many existing serial gatekeeping procedures rely on Bonferroni inequalities.
- Published
- 2022
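The arcsine square-root transformation mentioned above stabilizes the variance of a sample proportion at approximately 1/(4n). A textbook two-sample statistic on that scale (a sketch of the transformation only, not the author's closed testing procedure) looks like this:

```python
import math

def arcsine_z(x1, n1, x2, n2):
    """Two-sample z-type statistic on the arcsine square-root scale.
    asin(sqrt(p_hat)) has approximate variance 1/(4n), so the difference
    of transformed proportions is standardized by sqrt(1/(4n1) + 1/(4n2))."""
    t1 = math.asin(math.sqrt(x1 / n1))
    t2 = math.asin(math.sqrt(x2 / n2))
    se = math.sqrt(1.0 / (4 * n1) + 1.0 / (4 * n2))
    return (t1 - t2) / se

# Hypothetical data: 30/100 successes vs. 15/100 successes
z = arcsine_z(30, 100, 15, 100)
print(round(z, 2))
```

Because the standard error on this scale depends only on the sample sizes, not on the unknown proportions, maxima of such statistics are convenient building blocks for the all-pairwise closed testing procedures the book develops.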