8,768 results for "MEASUREMENT ERROR"
Search Results
2. To recommend or not recommend is the question: Does NPS predict word-of-mouth?
- Author
-
Schlosser, Ann
- Subjects
SATISFACTION ,CONSUMERS' reviews ,CONSUMERS ,ONLINE shopping ,MEASUREMENT errors - Abstract
The Net Promoter Score (NPS) is ubiquitous, relying on a single-item question to capture consumers' word-of-mouth (WOM). The question asks consumers for their likelihood of recommending a brand to friends and colleagues. Despite its popularity and advantages over longer satisfaction surveys, NPS has potential weaknesses. Among them are that the NPS question (1) is double-barreled by asking in a single question for likelihood to recommend to friends and likelihood to recommend to colleagues, (2) focuses on recommendations, and thus, ignores consumers' likelihood to spread negative WOM, and (3) ignores online WOM, which often involves recommendations to strangers rather than friends or colleagues. This paper empirically tests these three potential weaknesses of the NPS measure on the WOM conclusions derived from NPS. Specifically, three experiments vary whether NPS assesses likelihood to recommend to a friend and colleague in a single question (how NPS is currently measured) or in two separate questions. In addition, NPS is compared to responses to an explicit negative WOM question (intent to warn others about the brand). Moreover, across studies, the NPS is reported for a recent positive experience and either a recent negative experience or a recent mixed experience. NPS is also compared to likelihood to engage in online WOM in terms of posting an online review and the intended online rating. By examining these issues, this research sheds light on consumers' interpretations of NPS, the factors that influence these interpretations, and how these factors affect NPS' ability to predict negative WOM, online WOM, as well as satisfaction, loyal behavior, and WOM in general. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Psychometric assessment of the Runyankole-translated Marlowe-Crowne Social Desirability Scale among persons with HIV in Uganda.
- Author
-
Espinosa da Silva, Cristina, Fatch, Robin, Emenyonu, Nneka, Muyindike, Winnie, Adong, Julian, Rao, Sowmya, Chamie, Gabriel, Ngabirano, Christine, Tumwegamire, Adah, Kekibiina, Allen, Marson, Kara, Beesiga, Brian, Sanyu, Naomi, Katusiime, Anita, and Hahn, Judith
- Subjects
Internal consistency ,Measurement error ,Reliability ,Validity ,Humans ,Psychometrics ,Male ,Female ,Social Desirability ,HIV Infections ,Adult ,Uganda ,Middle Aged ,Self Report ,Alcohol Drinking ,Reproducibility of Results ,Surveys and Questionnaires - Abstract
BACKGROUND: Social desirability can negatively affect the validity of self-reported measures, including underreporting of stigmatized behaviors like alcohol consumption. The Marlowe-Crowne Social Desirability Scale (SDS) is widely used and composed of Denial and Attribution Domains (i.e., tendencies to deny undesirable traits or attribute socially desirable traits to oneself, respectively). Yet, limited psychometric research has been conducted in sub-Saharan Africa, where the prevalence of unhealthy alcohol consumption is high and religiosity and hierarchical social norms are strong. To address this gap, we (a) conducted an exploratory study assessing certain psychometric properties of the 28-item SDS (Runyankole-translated) among persons with HIV (PWH) in Uganda, and (b) examined the relationship between social desirability and self-reported alcohol use. METHODS: We pooled baseline data (N = 1153) from three studies of PWH engaged in alcohol use from 2017 to 2021. We assessed the translated scale's construct validity (via confirmatory factor analysis), internal consistency, item performance, differential item functioning by gender, concurrent validity with the DUREL religiosity index domains, and the association between social desirability and self-reported alcohol use. RESULTS: Participants had a mean age of 40.42 years, 63% were men, and 91% had an undetectable HIV viral load. The 28-item SDS had satisfactory construct validity (Model fit indices: RMSEA = 0.07, CFI = 0.84, TLI = 0.82) and internal consistency (Denial Domain ΩTotal = 0.82, Attribution Domain ΩTotal = 0.69). We excluded Item 14 (I never hesitate to help someone in trouble) from the Attribution Domain, which mitigated differential measurement error by gender and slightly improved the construct validity (Model fit indices: RMSEA = 0.06, CFI = 0.86, TLI = 0.85) and reliability (Attribution Domain ΩTotal = 0.72) of the 27-item modified SDS.
Using the 27-item SDS, we found that social desirability was weakly correlated with religiosity and inversely associated with self-reported alcohol use after adjusting for biomarker-measured alcohol use and other confounders (β = -0.05, 95% confidence interval: -0.09 to -0.01, p-value = 0.03). CONCLUSIONS: We detected and mitigated measurement error in the 28-item Runyankole-translated SDS, and found that the modified 27-item scale had satisfactory construct validity and internal consistency in our sample. Future studies should continue to evaluate the psychometric properties of the Runyankole-translated SDS, including retranslating Item 14 and reevaluating its performance.
- Published
- 2024
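The ΩTotal values in the entry above are McDonald's omega reliabilities from a one-factor measurement model. As a rough illustration of how omega is computed, a minimal sketch using hypothetical standardized loadings rather than the study's estimates:

```python
def omega_total(loadings, error_vars):
    """McDonald's omega total: (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    return s * s / (s * s + sum(error_vars))

# Hypothetical standardized loadings for a short domain (NOT the study's values)
loadings = [0.6, 0.5, 0.7, 0.55]
error_vars = [1 - l * l for l in loadings]  # unique variances under a one-factor model
print(round(omega_total(loadings, error_vars), 3))  # -> 0.68
```

Dropping a poorly performing item (like Item 14 above) changes both sums, which is why omega can rise when a weak item is removed.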
4. Mitochondrial Oxygen Measurement Variability in Critically Ill Patients (INOX Variability Study)
- Author
-
Leiden University Medical Center and Professor J.G. van der Bom
- Published
- 2024
5. Interviewer biases in medical survey data: The example of blood pressure measurements.
- Author
-
Geldsetzer, Pascal, Chang, Andrew, Meijer, Erik, Sudharsanan, Nikkil, Charu, Vivek, Kramlinger, Peter, and Haarburger, Richard
- Subjects
blood pressure ,health survey ,hypertension ,interviewer effects ,measurement error - Abstract
Health agencies rely upon survey-based physical measures to estimate the prevalence of key global health indicators such as hypertension. Such measures are usually collected by nonhealthcare worker personnel and are potentially subject to measurement error due to variations in interviewer technique and setting, termed interviewer effects. In the context of physical measurements, particularly in low- and middle-income countries, interviewer-induced biases have not yet been examined. Using blood pressure as a case study, we aimed to determine the relative contribution of interviewer effects on the total variance of blood pressure measurements in three large nationally representative health surveys from the Global South. We utilized 169,681 observations between 2008 and 2019 from three health surveys (Indonesia Family Life Survey, National Income Dynamics Study of South Africa, and Longitudinal Aging Study in India). In a linear mixed model, we modeled systolic blood pressure as a continuous dependent variable and interviewer effects as random effects alongside individual factors as covariates. To quantify the interviewer effect-induced uncertainty in hypertension prevalence, we utilized a bootstrap approach comparing subsamples of observed blood pressure measurements to their adjusted counterparts. Our analysis revealed that the proportion of variation contributed by interviewers to blood pressure measurements was statistically significant but small: approximately 0.24% to 2.2%, depending on the cohort. Thus, hypertension prevalence estimates were not substantially impacted at national scales. However, individual extreme interviewers could account for measurement divergences as high as 12%, meaning highly biased interviewers could have important impacts on hypertension estimates at the subdistrict level.
- Published
- 2024
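The "proportion of variation contributed by interviewers" in the entry above is, in mixed-model terms, an intraclass correlation. A minimal sketch of the idea on simulated data, using a one-way random-effects ANOVA estimator rather than the authors' full covariate-adjusted mixed model; the interviewer and residual standard deviations below are assumptions chosen to land in the reported 0.24–2.2% range, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n_interviewers, n_per = 50, 100
# Simulated systolic BP: a common mean plus an interviewer-specific shift plus noise
interviewer_effect = rng.normal(0, 2.0, n_interviewers)      # sd 2 mmHg between interviewers
y = np.array([130 + interviewer_effect[i] + rng.normal(0, 15, n_per)
              for i in range(n_interviewers)])               # sd 15 mmHg within interviewer

# One-way random-effects ANOVA estimator of the interviewer variance share (ICC)
group_means = y.mean(axis=1)
msb = n_per * group_means.var(ddof=1)        # between-interviewer mean square
msw = y.var(axis=1, ddof=1).mean()           # within-interviewer mean square
sigma2_b = max((msb - msw) / n_per, 0.0)     # estimated between-interviewer variance
icc = sigma2_b / (sigma2_b + msw)
print(f"interviewer share of variance: {icc:.3f}")  # theory: 4 / (4 + 225) ~ 0.017
```

The small ICC at national scale is consistent with a few extreme interviewers still mattering locally: the random-effect distribution has tails, and a single interviewer's shift applies to all of their measurements.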
6. Does survey mode matter? Comparing in-person and phone agricultural surveys in India.
- Author
-
Anderson, Ellen, Singh, Rupika, Stein, Daniel, Lybbert, Travis, and Shenoy, Ashish
- Subjects
Agriculture ,Data collection ,Measurement error ,Phone survey ,Survey mode - Abstract
Ubiquitous mobile phone ownership makes phone surveying an attractive method of low-cost data collection. We explore differences between in-person and phone survey measures of agricultural production collected for an impact evaluation in India. Phone responses have greater mean and variance, a difference that persists even within a subset of respondents that answered the same question over both modes. Treatment effect estimation remains stable across survey mode, but estimates are less precise when using phone data. These patterns are informative for cost and sample size considerations in study design and for aggregating evidence across study sites or time periods.
- Published
- 2024
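One practical consequence of the higher phone-survey variance noted in the entry above: matching the precision of an in-person survey requires proportionally more respondents, because the standard error of a difference in means scales with the outcome variance. A small sketch; the variance ratio of 1.5 is a hypothetical value, not the paper's estimate:

```python
import math

def n_for_se(var, target_se):
    """Per-arm n so that the two-arm difference-in-means SE = sqrt(2*var/n) hits target_se."""
    return math.ceil(2 * var / target_se ** 2)

var_inperson, var_phone = 1.0, 1.5      # hypothetical outcome variances by survey mode
target_se = 0.1
n_ip = n_for_se(var_inperson, target_se)   # 200 per arm
n_ph = n_for_se(var_phone, target_se)      # 300 per arm
print(n_ip, n_ph)  # phone needs var_phone / var_inperson times more respondents
```

Whether phone surveying is still cheaper then depends on whether the per-respondent cost saving exceeds the extra sample required.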
7. A Bayesian semi‐parametric scalar‐on‐function regression with measurement error using instrumental variables.
- Author
-
Zoh, Roger S, Luan, Yuanyuan, Xue, Lan, Allison, David B, and Tekwe, Carmen D
- Subjects
- *
MEASUREMENT errors , *GENERALIZED method of moments , *BODY mass index , *PHYSICAL activity , *NATIONAL competency-based educational tests - Abstract
Wearable devices such as the ActiGraph are now commonly used in research to monitor or track physical activity. This trend corresponds with the growing need to assess the relationships between physical activity and health outcomes, such as obesity, accurately. Device‐based physical activity measures are best treated as functions when assessing their associations with scalar‐valued outcomes such as body mass index. Scalar‐on‐function regression (SoFR) is a suitable regression model in this setting. Most estimation approaches in SoFR assume that the measurement error in functional covariates is white noise. Violating this assumption can lead to underestimating model parameters. There are limited approaches to correcting measurement errors for frequentist methods and none for Bayesian methods in this area. We present a non‐parametric Bayesian measurement error‐corrected SoFR model that relaxes all the constraining assumptions often involved with these models. Our estimation relies on an instrumental variable allowing a time‐varying biasing factor, a significant departure from the current generalized method of moment (GMM) approach. Our proposed method also permits model‐based grouping of the functional covariate following measurement error correction. This grouping of the measurement error‐corrected functional covariate allows additional ease of interpretation of how the different groups differ. Our method is easy to implement, and we demonstrate its finite sample properties in extensive simulations. Finally, we applied our method to data from the National Health and Nutrition Examination Survey to assess the relationship between wearable device‐based measures of physical activity and body mass index in adults in the United States. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Addressing dispersion in mis‐measured multivariate binomial outcomes: A novel statistical approach for detecting differentially methylated regions in bisulfite sequencing data.
- Author
-
Zhao, Kaiqiong, Oualkacha, Karim, Zeng, Yixiao, Shen, Cathy, Klein, Kathleen, Lakhal‐Chaieb, Lajmi, Labbe, Aurélie, Pastinen, Tomi, Hudson, Marie, Colmegna, Inés, Bernatsky, Sasha, and Greenwood, Celia M. T.
- Subjects
- *
EXPECTATION-maximization algorithms , *DNA methylation , *MEASUREMENT errors , *CELL communication , *RHEUMATOID arthritis - Abstract
Motivated by a DNA methylation application, this article addresses the problem of fitting and inferring a multivariate binomial regression model for outcomes that are contaminated by errors and exhibit extra‐parametric variations, also known as dispersion. While dispersion in univariate binomial regression has been extensively studied, addressing dispersion in the context of multivariate outcomes remains a complex and relatively unexplored task. The complexity arises from a noteworthy data characteristic observed in our motivating dataset: non‐constant yet correlated dispersion across outcomes. To address this challenge and account for possible measurement error, we propose a novel hierarchical quasi‐binomial varying coefficient mixed model, which enables flexible dispersion patterns through a combination of additive and multiplicative dispersion components. To maximize the Laplace‐approximated quasi‐likelihood of our model, we further develop a specialized two‐stage expectation‐maximization (EM) algorithm, where a plug‐in estimate for the multiplicative scale parameter enhances the speed and stability of the EM iterations. Simulations demonstrated that our approach yields accurate inference for smooth covariate effects and exhibits excellent power in detecting non‐zero effects. Additionally, we applied our proposed method to investigate the association between DNA methylation, measured across the genome through targeted custom capture sequencing of whole blood, and levels of anti‐citrullinated protein antibodies (ACPA), a preclinical marker for rheumatoid arthritis (RA) risk. Our analysis revealed 23 significant genes that potentially contribute to ACPA‐related differential methylation, highlighting the relevance of cell signaling and collagen metabolism in RA. We implemented our method in the R Bioconductor package called "SOMNiBUS." [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Handling missing data and measurement error for early-onset myopia risk prediction models.
- Author
-
Lai, Hongyu, Gao, Kaiye, Li, Meiyan, Li, Tao, Zhou, Xiaodong, Zhou, Xingtao, Guo, Hui, and Fu, Bo
- Subjects
- *
RECEIVER operating characteristic curves , *MISSING data (Statistics) , *STATISTICAL models , *MEASUREMENT errors , *DECISION trees - Abstract
Background: Early identification of children at high risk of developing myopia is essential to prevent myopia progression by introducing timely interventions. However, missing data and measurement error (ME) are common challenges in risk prediction modelling that can introduce bias in myopia prediction. Methods: We explore four imputation methods to address missing data and ME: single imputation (SI), multiple imputation under missing at random (MI-MAR), multiple imputation with calibration procedure (MI-ME), and multiple imputation under missing not at random (MI-MNAR). We compare four machine-learning models (Decision Tree, Naive Bayes, Random Forest, and Xgboost) and three statistical models (logistic regression, stepwise logistic regression, and least absolute shrinkage and selection operator logistic regression) in myopia risk prediction. We apply these models to the Shanghai Jinshan Myopia Cohort Study and also conduct a simulation study to investigate the impact of missing mechanisms, the degree of ME, and the importance of predictors on model performance. Model performance is evaluated using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). Results: Our findings indicate that in scenarios with missing data and ME, using MI-ME in combination with logistic regression yields the best prediction results. In scenarios without ME, employing MI-MAR to handle missing data outperforms SI regardless of the missing mechanisms. When ME has a greater impact on prediction than missing data, the relative advantage of MI-MAR diminishes, and MI-ME becomes superior. Furthermore, our results demonstrate that statistical models exhibit better prediction performance than machine-learning models. Conclusion: MI-ME emerges as a reliable method for handling missing data and ME in important predictors for early-onset myopia risk prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
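All of the multiple-imputation variants in the entry above (MI-MAR, MI-ME, MI-MNAR) share the same final step: model estimates from the m imputed datasets are combined with Rubin's rules. A minimal sketch of that pooling step, using hypothetical per-imputation estimates rather than anything from the study:

```python
import statistics

def rubin_pool(estimates, variances):
    """Pool m multiply-imputed estimates via Rubin's rules; returns (point estimate, total variance)."""
    m = len(estimates)
    qbar = sum(estimates) / m              # pooled point estimate
    w = sum(variances) / m                 # within-imputation variance
    b = statistics.variance(estimates)     # between-imputation variance (sample variance)
    t = w + (1 + 1 / m) * b                # total variance, inflated for imputation uncertainty
    return qbar, t

# Hypothetical log-odds estimates and their variances from m = 5 imputed datasets
est = [0.42, 0.45, 0.40, 0.47, 0.41]
var = [0.010, 0.011, 0.010, 0.012, 0.010]
qbar, t = rubin_pool(est, var)
print(round(qbar, 3), round(t, 5))  # 0.43 0.01162
```

The between-imputation term is what distinguishes multiple imputation from single imputation: SI reports only the within-imputation variance and so understates uncertainty.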
10. An Adaptive Eccentricity Correction Method for Arrayed Single‐Axis TMR Current Sensors.
- Author
-
Li, Shenwang, Chen, Junkuan, Su, Qiuren, Zeng, Guangyu, Liu, Li, Shi, Wusheng, and Wu, Thomas
- Subjects
- *
CONVOLUTIONAL neural networks , *MEASUREMENT errors , *CORRECTION factors , *SEARCH algorithms , *ELECTRIC lines - Abstract
Current sensors based on the tunneling magnetoresistive effect (TMR) are widely used for current measurement due to their high sensitivity, small size, and low power consumption. This paper proposes an effective error correction model to rectify the eccentricity of the transmission line, which can cause a significant measurement error in the ring‐array single‐axis TMR sensor. The model employs a convolutional neural network (CNN) to identify the relationship between the conductor eccentricity and the output of three sensors. The resulting correction factor is then fed back to eliminate the error associated with wire eccentricity. Concurrently, the Sparrow search algorithm (SSA) is employed to optimize the hyperparameters of the CNN in order to enhance the model's performance. The experimental results demonstrate that the maximum error of the ring‐array single‐axis TMR current sensor, corrected by SSA‐CNN, is less than 0.42%, which markedly enhances the precision of the measurement. © 2024 Institute of Electrical Engineers of Japan and Wiley Periodicals LLC. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Influence of lossy JPEG compression on measurement uncertainty for luminance-based area measurements.
- Author
-
Rohweder, Niels-Ole, Raimund, Lisa Alena, and Rembe, Christian
- Subjects
VIDEO codecs ,AREA measurement ,QUALITY factor ,JPEG (Image coding standard) ,STANDARD deviations
- Published
- 2024
- Full Text
- View/download PDF
12. Interviewer Effects on the Measurement of Physical Performance in a Cross-National Biosocial Survey.
- Author
-
Waldmann, Sophia, Sakshaug, Joseph W, and Cernat, Alexandru
- Subjects
- *
MEASUREMENT errors , *COLLECTING of accounts , *RETIREMENT age , *INTRACLASS correlation , *PHYSICAL mobility - Abstract
Biosocial surveys increasingly use interviewers to collect objective physical health measures (or "biomeasures") in respondents' homes. While interviewers play an important role, their high involvement can lead to unintended interviewer effects on the collected measurements. Such interviewer effects add uncertainty to population estimates and have the potential to lead to erroneous inferences. This study examines interviewer effects on the measurement of physical performance in a cross-national and longitudinal setting using data from the Survey of Health, Ageing and Retirement in Europe. The analyzed biomeasures exhibited moderate-to-large interviewer effects on the measurements, which varied across biomeasure types and across countries. Our findings demonstrate the necessity to better understand the origin of interviewer-related measurement errors in biomeasure collection and account for these errors in statistical analyses of biomeasure data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Psychometric properties of wearable technologies to assess post-stroke gait parameters: A systematic review.
- Author
-
Silva, Raiff Simplicio da, Silva, Stephano Tomaz da, Cardoso, Daiane Carla Rodrigues, Quirino, Maria Amanda Ferreira, Silva, Maria Heloiza Araújo, Gomes, Larissa Araujo, Fernandes, Jefferson Doolan, Oliveira, Raul Alexandre Nunes da Silva, Fernandes, Aline Braga Galvão Silveira, and Ribeiro, Tatiana Souza
- Subjects
- *
PSYCHOMETRICS , *WEARABLE technology , *STROKE patients , *DETECTORS , *PHYSICAL therapy - Abstract
Wearable technologies using inertial sensors are an alternative for gait assessment. However, their psychometric properties in evaluating post-stroke patients are still being determined. This systematic review aimed to evaluate the psychometric properties of wearable technologies used to assess post-stroke gait and analyze their reliability and measurement error. The review also investigated which wearable technologies have been used to assess angular changes in post-stroke gait. The present review included studies in English with no publication date restrictions that evaluated the psychometric properties (e.g., validity, reliability, responsiveness, and measurement error) of wearable technologies used to assess post-stroke gait. Searches were conducted from February to March 2023 in the following databases: Cochrane Central Registry of Controlled Trials (CENTRAL), Medline/PubMed, EMBASE Ovid, CINAHL EBSCO, PsycINFO Ovid, IEEE Xplore Digital Library (IEEE), and Physiotherapy Evidence Database (PEDro); the gray literature was also verified. The Consensus-based Standards for the Selection of Health Measurement Instruments (COSMIN) risk-of-bias tool was used to assess the quality of the studies that analyzed reliability and measurement error. Forty-two studies investigating validity (37 studies), reliability (16 studies), and measurement error (6 studies) of wearable technologies were included. Devices presented good reliability in measuring gait speed and step count; however, the quality of the evidence supporting this was low. The evidence of measurement error in step counts was indeterminate. Moreover, only two studies obtained angular results using wearable technology. Wearable technologies have demonstrated reliability in analyzing gait parameters (gait speed and step count) among post-stroke patients. However, higher-quality studies should be conducted to improve the quality of evidence and to address the measurement error assessment. 
Also, few studies used wearable technology to analyze angular changes during post-stroke gait. • Validity, reliability and measurement error of the devices have been investigated. • Devices presented good reliability in measuring gait parameters post-stroke. • Evidence on reliability is sufficient but of low quality. • Evidence on measurement error is sufficient but of low quality. • Few studies use portable devices to analyze angular changes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. The impacts of thermocouple insulation failure on the accuracy of temperature measurement data in forensic fire‐death scenarios—Part II: Low electrical resistance and contamination.
- Author
-
Silveira, David, Kendell, Ashley, and Shook, Beth
- Subjects
- *
FIRE exposure , *MEASUREMENT errors , *TEMPERATURE measurements , *WOOD , *CERAMICS - Abstract
Part II of this two‐part article investigates the impact of thermocouple insulation failure on temperature measurement data in forensic fire‐death scenarios. Two different models of glass fiber‐insulated thermocouple wires (GG‐K‐24‐SLE and HH‐K‐24 from Omega Engineering) were passed through a ceramic kiln at temperatures up to 1093°C to measure an ice bath at a constant 0°C. In a separate experiment, the same two models of thermocouple wire plus a BLMI‐XL‐K‐18U‐120 mineral‐insulated metal‐sheathed thermocouple probe were passed through a wood pallet fire to measure an ice bath. In the ceramic kiln, the effect on measurement errors was determined for short vs. long exposure lengths and clean insulation vs. insulation contaminated with pork fat. Glass fiber‐insulated thermocouple wires showed severe failure in both experiments, with errors ranging from −270°C to almost 2200°C. The metal‐sheathed probe showed no evidence of insulation failure and continued to accurately measure the ice bath temperature within expected margins of error around 0°C. This study highlights how exposure of inadequate thermocouples to fire‐level temperatures produces severe errors in temperature data. Consequently, it will not be possible to use this data to draw any accurate conclusions about the effects of fire exposure to human donors or animal proxies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. The impacts of thermocouple insulation failure on the accuracy of temperature measurement data in forensic fire‐death scenarios—Part I: Physical disintegration.
- Author
-
Silveira, David, Kendell, Ashley, and Shook, Beth
- Subjects
- *
FIRE exposure , *MEASUREMENT errors , *THERMOCOUPLES , *ELECTRIC insulators & insulation , *TEMPERATURE measurements - Abstract
Thermocouples are utilized to monitor a wide range of temperatures in industrial applications. They are also used in both fire and forensic science research to measure temperatures of fires and of materials exposed to fire. Taking accurate temperature measurements during forensic fire‐death scenarios is very difficult due to direct fire exposure to thermocouples, shrinkage and destruction of tissues, and movements from pyre collapse and pugilistic posturing of human donors. This two‐part study investigates the impacts on the accuracy of temperature data if the selected thermocouples are unable to withstand fire exposure. Part I (this article) provides an overview of thermocouple theory along with evidence of the physical deterioration that occurs when glass fiber‐insulated thermocouple wires are overheated by exposure to fire‐level temperatures in a muffle furnace. This study verified that insulation overheating causes embrittlement and disintegration, which can cause the indicated temperature to reflect a new location of measurement located far away from the original measuring junction at the thermocouple tip. Part II will discuss the measurement errors that occurred due to low electrical resistance of insulation when three different thermocouple models were passed through fire‐level temperatures to measure an ice bath at a constant temperature of 0°C. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Diagnostics for partially linear measurement error models.
- Author
-
Emami, Hadi
- Subjects
- *
ERRORS-in-variables models , *MEASUREMENT errors , *LENGTH measurement - Abstract
Partially linear models are useful tools to analyze data from economic, genetic, and other fields. Similar to other data analyses, the identification of influential observations that may be potential outliers is an important step beyond estimation in such models. The objective of this article is to develop some diagnostic measures for identifying influential observations in partially linear models when some of the covariates are measured with errors. Deletion measures are developed based on case deletion, mean shift outlier models, and the corrected likelihood of Nakamura (1990). The performance of the methods is illustrated by an artificial example and a real example. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Effect of Speckle Edge Characteristics on DIC Calculation Error.
- Author
-
Cui, H., Zeng, Z., Zhang, H., and Yang, F.
- Subjects
- *
SPECKLE interferometry , *DIGITAL image correlation , *MEASUREMENT errors , *SPECKLE interference , *ERROR functions - Abstract
Background: In DIC studies, positional parameters and speckle size are commonly used to characterise speckle images. The influence of edge parameters is ignored. This leads to a great difference between the DIC calculation results of simulated and real images, and some contradictory results have also been produced. Objective: The main objective of this paper is to investigate the effect of edge parameters and to propose more suitable parameters for describing speckle characteristics. Methods: Firstly, this paper proposes a series of more reasonable parameters to describe the speckle features based on the mathematical expression of the speckle image. Subsequently, the effect of different edge functions on the computational error of DIC is investigated. The effect of different edge functions on pre-filtering is also investigated. Finally, real speckle images are produced using Gaussian and step functions to study the difference between the simulated and real speckle images. Results: Generally, it is believed that prefiltering can reduce the computational error of DIC, but for Gaussian edges, prefiltering hardly reduces the error, whereas hybrid edges correctly exhibit this phenomenon. Although the Gaussian edge performs well in the simulation, the actual speckle images taken show that the DIC error corresponding to the camera-acquired Gaussian speckle is much larger than that of the step speckle. Conclusions: The introduction of edge parameters to describe speckle images is necessary for DIC studies. Pre-filtering generally reduces the DIC error, but for Gaussian edges this property cannot be demonstrated. The most suitable edges in reality are step edges, not Gaussian edges. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Nonparametric estimation of a survival function in the presence of measurement errors on the failure time of interest.
- Author
-
Jin, Shaojia, Liu, Yanyan, Mao, Guangcai, Sun, Jianguo, and Wu, Yuanshan
- Subjects
- *
NONPARAMETRIC estimation , *ELECTRONIC health records , *EXTRAPOLATION , *CHRONIC diseases , *MEASUREMENT errors - Abstract
This article discusses nonparametric estimation of a survival function in the presence of measurement errors on the observation of the failure time of interest. One situation where such issues arise would be clinical studies of chronic diseases where the observation on the time to the failure event of interest such as the onset of the disease relies on patient recall or chart review of electronic medical records. It is easy to see that both situations can be subject to measurement errors. To resolve this problem, we propose a simulation extrapolation approach to correct the bias induced by the measurement error. To overcome potential computational difficulties, we use spline regression to approximate the unspecified extrapolated coefficient function of time, and establish the asymptotic properties of our proposed estimator. The proposed method is applied to nonparametric estimation based on interval‐censored data. Extensive numerical experiments involving both simulated and actual study datasets demonstrate the feasibility of this proposed estimation procedure. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
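The simulation extrapolation idea in the entry above can be illustrated in a much simpler setting than the paper's: a scalar regression slope attenuated by additive measurement error with known variance, and a quadratic rather than spline-based extrapolant. All numbers below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, sig_u = 20_000, 1.0, 0.8
x = rng.normal(0, 1, n)
w = x + rng.normal(0, sig_u, n)          # error-prone observation of x
y = beta * x + rng.normal(0, 0.5, n)

def slope(xv, yv):
    return np.cov(xv, yv)[0, 1] / np.var(xv, ddof=1)

# SIMEX: add extra noise at each lambda, average over replicates, then
# extrapolate the slope back to lambda = -1 (i.e., no measurement error)
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = [np.mean([slope(w + rng.normal(0, np.sqrt(l) * sig_u, n), y)
                for _ in range(20)]) for l in lams]
coef = np.polyfit(lams, est, 2)          # quadratic extrapolant
simex = np.polyval(coef, -1.0)
naive = slope(w, y)
print(f"naive: {naive:.3f}  simex: {simex:.3f}  true: {beta}")
```

The quadratic extrapolant removes most but not all of the attenuation (here the naive slope is about sigma_x^2 / (sigma_x^2 + sigma_u^2) of the truth); the exact extrapolant in this toy case is 1 / (sigma_x^2 + (1 + lambda) * sigma_u^2), which motivates the paper's more flexible spline approximation of the extrapolated coefficient function.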
19. Preoperative cognitive profile predictive of cognitive decline after subthalamic deep brain stimulation in Parkinson's disease.
- Author
-
Mana, Josef, Bezdicek, Ondrej, Růžička, Filip, Lasica, Andrej, Šmídová, Anna, Klempířová, Olga, Nikolai, Tomáš, Uhrová, Tereza, Růžička, Evžen, Urgošík, Dušan, and Jech, Robert
- Subjects
- *
DEEP brain stimulation , *EXECUTIVE function , *PARKINSON'S disease , *ERRORS-in-variables models , *STATISTICAL reliability , *MEASUREMENT errors - Abstract
Cognitive decline represents a severe non‐motor symptom of Parkinson's disease (PD) that can significantly reduce the benefits of subthalamic deep brain stimulation (STN DBS). Here, we aimed to describe post‐surgery cognitive decline and identify pre‐surgery cognitive profile associated with faster decline in STN DBS‐treated PD patients. A retrospective observational study of 126 PD patients treated by STN DBS combined with oral dopaminergic therapy followed for 3.54 years on average (SD = 2.32) with repeated assessments of cognition was conducted. Pre‐surgery cognitive profile was obtained via a comprehensive neuropsychological examination and data analysed using exploratory factor analysis and Bayesian generalized linear mixed models. On the whole, we observed a mild annual cognitive decline of 0.90 points from a total of 144 points in the Mattis Dementia Rating Scale (95% posterior probability interval [−1.19, −0.62]) with high inter‐individual variability. However, true score changes did not reach previously reported reliable change cut‐offs. Executive deficit was the only pre‐surgery cognitive variable to reliably predict the rate of post‐surgery cognitive decline. On the other hand, exploratory analysis of electrode localization did not yield any statistically clear results. Overall, our data and models imply mild gradual average annual post‐surgery cognitive decline with high inter‐individual variability in STN DBS‐treated PD patients. Nonetheless, patients with worse long‐term cognitive prognosis can be reliably identified via pre‐surgery examination of executive functions. To further increase the utility of our results, we demonstrate how our models can help with disentangling true score changes from measurement error in future studies of post‐surgery cognitive changes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Conditional score approaches to errors‐in‐variables competing risks data in discrete time.
- Author
-
Wen, Chi‐Chung and Chen, Yi‐Hau
- Subjects
- *
COMPETING risks , *MEASUREMENT errors , *SURVIVAL analysis (Biometry) , *REGRESSION analysis , *RISK assessment - Abstract
Analysis of competing risks data has been an important topic in survival analysis due to the need to account for the dependence among the competing events. Also, event times are often recorded on discrete time scales, rendering the models tailored for discrete‐time nature useful in the practice of survival analysis. In this work, we focus on regression analysis with discrete‐time competing risks data, and consider the errors‐in‐variables issue where the covariates are prone to measurement errors. Viewing the true covariate value as a parameter, we develop the conditional score methods for various discrete‐time competing risks models, including the cause‐specific and subdistribution hazards models that have been popular in competing risks data analysis. The proposed estimators can be implemented by efficient computation algorithms, and the associated large sample theories can be simply obtained. Simulation results show satisfactory finite sample performances, and the application with the competing risks data from the scleroderma lung study reveals the utility of the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Silence is golden, but my measures still see—why cheaper-but-noisier outcome measures in large simple trials can be more cost-effective than gold standards.
- Author
-
Woolf, Benjamin, Pedder, Hugo, Rodriguez-Broadbent, Henry, and Edwards, Phil
- Subjects
- *
MEASUREMENT errors , *RANDOMIZED controlled trials , *SAMPLING errors , *JUDGMENT (Psychology) , *RESEARCH personnel , *SELECTION bias (Statistics) , *PERCENTILES - Abstract
Objective: To assess the cost-effectiveness of using cheaper-but-noisier outcome measures, such as a short questionnaire, for large simple clinical trials. Background: To detect associations reliably, trials must avoid bias and random error. To reduce random error, we can increase the size of the trial and increase the accuracy of the outcome measurement process. However, with fixed resources, there is a trade-off between the number of participants a trial can enrol and the amount of information that can be collected on each participant during data collection. Methods: To consider the effect on measurement error of using outcome scales with varying numbers of categories, we define and calculate the variance from categorisation that would be expected from using a category midpoint; define the analytic conditions under which such a measure is cost-effective; use meta-regression to estimate the impact of participant burden, defined as questionnaire length, on response rates; and develop an interactive web-app to allow researchers to explore the cost-effectiveness of using such a measure under plausible assumptions. Results: An outcome scale with only a few categories greatly reduced the variance of non-measurement. For example, a scale with five categories reduced the variance of non-measurement by 96% for a uniform distribution. We show that a simple measure will be more cost-effective than a gold-standard measure if the relative increase in variance due to using it is less than the relative increase in cost from the gold standard, assuming it does not introduce bias in the measurement. We found an inverse power law relationship between participant burden and response rates such that doubling the burden on participants reduces the response rate by around one third.
Finally, we created an interactive web-app (https://benjiwoolf.shinyapps.io/cheapbutnoisymeasures/) to allow exploration of when using a cheap-but-noisy measure will be more cost-effective using realistic parameters. Conclusion: Cheaper-but-noisier questionnaires containing just a few questions can be a cost-effective way of maximising power. However, their use requires a judgement on the trade-off between the potential increase in risk of information bias and the reduction in the potential of selection bias due to the expected higher response rates. Key messages: A cheaper-but-noisier outcome measure, like a short form questionnaire, is a more cost-effective method of maximising power in large simple clinical trials than an error free gold standard measure when the percentage increase in noise from using the cheaper-but-noisier measure is less than the relative difference in the cost of administering the two measures. We have created an R-shiny app to facilitate the exploration of when this condition is met at https://benjiwoolf.shinyapps.io/cheapbutnoisymeasures/ Cheaper-but-noisier outcome measures are more likely to introduce information bias than a gold standard but may reduce selection bias because they reduce loss-to-follow-up. Researchers therefore need to form a judgement about the relative increase or decrease in bias before using a cheap-but-noisy measure. We encourage the development and validation of short form questionnaires to enable the use of high quality cheaper-but-noisier outcome measures in randomised controlled trials. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
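The 96% figure for a five-category scale can be checked with a quick simulation (a sketch, not the authors' code; the uniform outcome on [0, 1] and the bin count k are the only assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 1_000_000)  # latent continuous outcome on [0, 1]

# Record only the midpoint of one of k equal-width categories.
k = 5
midpoints = (np.floor(x * k) + 0.5) / k

# Variance added by categorisation, relative to the outcome's own variance:
# analytically 1/k**2 for a uniform outcome, i.e. 4% for k = 5, so five
# categories already remove 96% of the categorisation variance.
ratio = np.var(x - midpoints) / np.var(x)
print(round(ratio, 3))  # ≈ 0.04
```

The same arithmetic gives 1% for ten categories, which is why a short scale can lose remarkably little information.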
22. AFFECT: an R package for accelerated functional failure time model with error-contaminated survival times and applications to gene expression data.
- Author
-
Chen, Li-Pang and Huang, Hsiao-Ting
- Subjects
- *
MEASUREMENT errors , *GENE expression , *NONLINEAR functions , *ACQUISITION of data , *SURVIVAL analysis (Biometry) , *BOOSTING algorithms - Abstract
Background: Survival analysis has been used to characterize the time-to-event data. In medical studies, a typical application is to analyze the survival time of specific cancers by using high-dimensional gene expressions. The main challenges include the involvement of non-informative gene expressions and a possibly nonlinear relationship between survival time and gene expressions. Moreover, due to possibly imprecise data collection or wrong records, measurement error might be ubiquitous in the survival time and its censoring status. Ignoring measurement error effects may incur biased estimators and wrong conclusions. Results: To tackle those challenges and derive a reliable estimation with an efficient computational implementation, we develop the R package AFFECT, which stands for Accelerated Functional Failure time model with Error-Contaminated survival Times. Conclusions: This package aims to correct for measurement error effects in survival times and implements a boosting algorithm under corrected data to determine informative gene expressions as well as derive the corresponding nonlinear functions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. A new general biased estimator in linear measurement error model.
- Author
-
Goyal, Pragya, Tiwari, Manoj K., Bist, Vikas, and Ababneh, Faisal
- Subjects
- *
ERRORS-in-variables models , *MONTE Carlo method , *LENGTH measurement , *MULTICOLLINEARITY , *MEASUREMENT errors , *COMPUTER simulation - Abstract
Numerous biased estimators are known to circumvent the multicollinearity problem in linear measurement error models. This article proposes a general biased estimator with the ridge regression and the Liu estimators as special cases. The efficiency of the suggested estimator is compared with ridge regression and Liu estimators under the mean squared error matrix criterion. In addition, a Monte Carlo simulation study and a numerical evaluation have been conducted to elucidate the superiority of the new general biased estimator over other estimators. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
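The two special cases named above have standard closed forms; a minimal numpy sketch (illustrative data, not the paper's general estimator) verifies the textbook identities that ridge with k = 0 and the Liu estimator with d = 1 both collapse to OLS:

```python
import numpy as np

def ols(X, y):
    # Ordinary least squares: (X'X)^{-1} X'y
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, k):
    # Ridge: (X'X + k I)^{-1} X'y -- shrinks coefficients toward zero
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def liu(X, y, d):
    # Liu: (X'X + I)^{-1} (X'y + d * beta_OLS)
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + np.eye(p), X.T @ y + d * ols(X, y))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=50)  # near-collinear column
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(size=50)

# Sanity checks: ridge with k = 0 and Liu with d = 1 both reduce to OLS.
print(np.allclose(ridge(X, y, 0.0), ols(X, y)))
print(np.allclose(liu(X, y, 1.0), ols(X, y)))
```

Since X'y = X'X·β̂_OLS, the Liu estimator equals (X'X + I)^{-1}(X'X + dI)β̂_OLS, which makes the d = 1 identity immediate.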
24. Performance of function-based AEWMA coefficient of variation control chart with measurement errors.
- Author
-
Arshad, Asma, Noor-ul-Amin, Muhammad, Dogu, Eralp, and Hanif, Muhammad
- Subjects
- *
MONTE Carlo method , *QUALITY control charts , *PARAMETRIC processes , *ADAPTIVE control systems , *MOVING average process - Abstract
A new coefficient of variation (CV) functionally adaptive exponentially weighted moving average control chart is presented by considering the influence of measurement error (ME), named the "FAEWMA-ME-CV" control chart. The linear covariate model and the multiple measurements method are used to present the control chart. The designed function efficiently detects infrequent process changes in the form of parametric shifts in the process CV. The proficiency of the design is evaluated by analyzing run-length values, which are determined by Monte Carlo simulation runs and reported for various parametric settings in extensive tables. The analysis shows that the presence of ME plays a significant role in the performance of a control chart. Moreover, a real-life dataset illustrates the application of the proposed FAEWMA-ME-CV chart to support the argument. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
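The adaptive design itself is the paper's contribution and is not reproduced here, but the EWMA recursion it builds on, together with the multiple-measurements device for damping ME, can be sketched as follows (all parameters illustrative):

```python
import numpy as np

def sample_cv(x):
    # Coefficient of variation: sd / mean
    return np.std(x, ddof=1) / np.mean(x)

def ewma(stats, lam, start):
    # Z_t = lam * stat_t + (1 - lam) * Z_{t-1}
    z, path = start, []
    for s in stats:
        z = lam * s + (1 - lam) * z
        path.append(z)
    return path

rng = np.random.default_rng(2)
lam, n, r = 0.2, 5, 3  # smoothing constant, subgroup size, repeat measurements
cvs = []
for _ in range(200):
    true = rng.normal(50.0, 5.0, n)                          # in-control CV = 0.1
    readings = true[:, None] + rng.normal(0.0, 2.0, (n, r))  # additive ME
    # Averaging r error-contaminated readings shrinks the ME variance by r.
    cvs.append(sample_cv(readings.mean(axis=1)))
path = ewma(cvs, lam, start=0.1)
print(round(path[-1], 3))
```

In the actual chart, the smoothing constant is replaced by an adaptive function of the observed shift, and signals are raised when the EWMA statistic crosses control limits calibrated to a target in-control run length.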
25. Comparing Egocentric and Sociocentric Centrality Measures in Directed Networks.
- Author
-
An, Weihua
- Subjects
- *
MEASUREMENT errors , *SAMPLING errors , *EXPERIMENTAL design , *RECIPROCITY (Psychology) , *DENSITY - Abstract
Egocentric networks represent a popular research design for network research. However, to what extent and under what conditions egocentric network centrality measures can serve as reasonable substitutes for their sociocentric counterparts are important questions to study. The answers to these questions are uncertain simply because of the large variety of networks. Hence, this paper aims to provide exploratory answers to these questions by analyzing both empirical and simulated data. Through analyses of various empirical networks (including some classic albeit small ones), this paper shows that egocentric betweenness approximates sociocentric betweenness quite well (the correlation is high across almost all the networks being examined) while egocentric closeness approximates sociocentric closeness only reasonably well (the correlation is a bit lower on average with a larger variance across networks). Simulations also confirm this finding. Analyses further show that egocentric approximations of betweenness and closeness seem to work well in different types of networks (as featured by network size, density, centralization, reciprocity, transitivity, and geodistance). Lastly, the paper briefly presents three ideas to help improve egocentric approximations of centrality measures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Measurement error and bias in real-world oncology endpoints when constructing external control arms.
- Author
-
Ackerman, Benjamin, Gan, Ryan W., Meyer, Craig S., Wang, Jocelyn R., Youyi Zhang, Hayden, Jennifer, Mahoney, Grace, Lund, Jennifer L., Weberpals, Janick, Schneeweiss, Sebastian, Roose, James, Siddique, Juned, Nadeem, Omar, Giri, Smith, Stürmer, Til, Ailawadhi, Sikander, Batavia, Ashita S., and Sarsour, Khaled
- Subjects
- *
MEASUREMENT errors , *PROGRESSION-free survival , *MULTIPLE myeloma , *TREATMENT effectiveness , *COMPARATOR circuits - Abstract
Introduction: While randomized controlled trials remain the reference standard for evaluating treatment efficacy, there is an increased interest in the use of external control arms (ECA), namely in oncology, using real-world data (RWD). Challenges related to measurement of real-world oncology endpoints, like progression-free survival (PFS), are one factor limiting the use and acceptance of ECAs as comparators to trial populations. Differences in how and when disease assessments occur in the real-world may introduce measurement error and limit the comparability of real-world PFS (rwPFS) to trial progression-free survival. While measurement error is a known challenge when conducting an externally-controlled trial with real-world data, there is limited literature describing key contributing factors, particularly in the context of multiple myeloma (MM). Methods: We distinguish between biases attributed to how endpoints are derived or ascertained (misclassification bias) and when outcomes are observed or assessed (surveillance bias). We further describe how misclassification of progression events (i.e., false positives, false negatives) and irregular assessment frequencies in multiple myeloma RWD can contribute to these biases, respectively. We conduct a simulation study to illustrate how these biases may behave, both individually and together. Results: We observe in simulation that certain types of measurement error may have more substantial impacts on comparability between mismeasured median PFS (mPFS) and true mPFS than others. For instance, when the observed progression events are misclassified as either false positives or false negatives, mismeasured mPFS may be biased towards earlier (mPFS bias = -6.4 months) or later times (mPFS bias = 13 months), respectively. However, when events are correctly classified but assessment frequencies are irregular, mismeasured mPFS is more similar to the true mPFS (mPFS bias = 0.67 months). 
Discussion: When misclassified progression events and irregular assessment times occur simultaneously, they may generate bias that is greater than the sum of their parts. Improved understanding of endpoint measurement error and how resulting biases manifest in RWD is important to the robust construction of ECAs in oncology and beyond. Simulations that quantify the impact of measurement error can help when planning for ECA studies and can contextualize results in the presence of endpoint measurement differences. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
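A toy simulation makes the surveillance-bias mechanism concrete (the event-time distribution and visit intervals below are assumptions for illustration, not values from the study): progression recorded only at scheduled assessments is rounded up to the next visit, and sparser, irregular real-world visits push the observed median later.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# True progression times in months (exponential is purely illustrative).
true_pfs = rng.exponential(scale=12.0, size=n)

# Progression is only *observed* at scheduled disease assessments, so the
# recorded time is rounded up to the next visit.  Assumed schedules:
# every 2 months on trial, irregular 2-6 month gaps in routine care.
trial_pfs = np.ceil(true_pfs / 2.0) * 2.0
gap = rng.uniform(2.0, 6.0, size=n)
rw_pfs = np.ceil(true_pfs / gap) * gap

for name, t in [("true", true_pfs), ("trial", trial_pfs), ("real-world", rw_pfs)]:
    print(name, "median PFS:", round(np.median(t), 1), "months")
```

Misclassification (false-positive or false-negative progression events) would shift the observed median in addition to this rounding effect, which is why the abstract treats the two biases separately.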
27. Matrix-variate generalized linear model with measurement error.
- Author
-
Sun, Tianqi, Li, Weiyu, and Lin, Lu
- Subjects
ERRORS-in-variables models ,LENGTH measurement ,MEASUREMENT errors - Abstract
Matrix-variate generalized linear model (mvGLM) has been investigated successfully under the framework of tensor generalized linear model, because matrix-form data can be regarded as a specific 2-dimensional tensor. But there are few works focusing on matrix-form data with measurement error (ME), since tensor in conjunction with ME is relatively complex in structure. In this paper we introduce a mvGLM to primarily explore the influence of ME in the model with matrix-form data. We calculate the asymptotic bias based on the error-prone mvGLM, and then develop bias-correction methods to tackle the effect of ME. Statistical properties for all methods are established, and the practical performance of all methods is further evaluated in analyses of synthetic and real data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Scaling and measurement error sensitivity of scoring rules for distribution forecasts.
- Author
-
Kleen, Onno
- Subjects
MEASUREMENT errors ,MARKET volatility ,GROSS domestic product ,FINANCIAL markets ,SCALING (Social sciences) - Abstract
Summary: This paper examines the impact of data rescaling and measurement error on scoring rules for distribution forecasts. First, I show that all commonly used scoring rules for distribution forecasts are robust to rescaling the data. Second, the forecast ranking based on the continuous ranked probability score is less sensitive to gross measurement error than the ranking based on the log score. The theoretical results are complemented by a simulation study aligned with frequently revised quarterly US gross domestic product (GDP) growth data, a simulation study aligned with financial market volatility, and an empirical application forecasting realized variances of S&P 100 constituents. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
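Both claims are easy to illustrate for a Gaussian forecast, whose CRPS has the well-known closed form (Gneiting and Raftery). The sketch below shows the log score penalising a gross outlier quadratically while the CRPS grows only roughly linearly:

```python
import math

SQRT_PI = math.sqrt(math.pi)

def phi(z):  # standard normal pdf
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):  # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def crps_normal(mu, sigma, x):
    # Closed-form CRPS of a N(mu, sigma^2) forecast at observation x
    z = (x - mu) / sigma
    return sigma * (z * (2.0 * Phi(z) - 1.0) + 2.0 * phi(z) - 1.0 / SQRT_PI)

def log_score(mu, sigma, x):
    # Negatively oriented log score: -log of the forecast density
    z = (x - mu) / sigma
    return 0.5 * z * z + math.log(sigma * math.sqrt(2.0 * math.pi))

# A gross outlier (z = 10) blows up the log score (~z^2/2) but moves the
# CRPS only by about |z| * sigma.
for z in (2.0, 10.0):
    print(z, round(crps_normal(0, 1, z), 2), round(log_score(0, 1, z), 2))
```

Rescaling observation and forecast by the same constant multiplies every CRPS by that constant, so forecast rankings are unchanged, consistent with the paper's robustness result.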
29. Electronic health record data is unable to effectively characterize measurement error from pulse oximetry: a simulation study.
- Author
-
Sarraf, Elie
- Abstract
Large data sets from electronic health records (EHR) have been used in journal articles to demonstrate race-based imprecision in pulse oximetry (SpO2) measurements. These articles do not appear to recognize the impact of the variability of the SpO2 values with respect to time ("deviation time"). This manuscript seeks to demonstrate that due to this variability, EHR data should not be used to quantify SpO2 error. Using the MIMIC-IV Waveform dataset, SpO2 values are sampled from 198 patients admitted to an intensive care unit and used as reference samples. The error derived from the EHR data is simulated using a set of deviation times. The laboratory oxygen saturation measurements are also simulated such that the performance of three simulated pulse oximeter devices will produce an average root mean squared (ARMS) error of 2%. An analysis is then undertaken to reproduce a medical device submission to a regulatory body by quantifying the mean error, the standard deviation of the error, and the ARMS error. Bland-Altman plots were also generated with their Limits of Agreement. Each analysis was repeated to evaluate whether the measurement errors were affected by increasing the deviation time. All error values increased linearly with respect to the logarithm of the time deviation. At 10 min, the ARMS error increased from a baseline of 2% to over 4%. EHR data cannot be reliably used to quantify SpO2 error. Caution should be used in interpreting prior manuscripts that rely on EHR data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
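The regulatory-style summary statistics named above are straightforward to compute. This sketch simulates a device meeting the 2% ARMS benchmark (all numbers illustrative, not from the MIMIC-IV analysis):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated paired readings: reference arterial saturation (SaO2) and a
# pulse-oximeter reading whose error has zero mean and 2% spread, so the
# device's true ARMS error is about the 2% benchmark in the abstract.
sao2 = rng.uniform(85.0, 100.0, 5_000)
spo2 = sao2 + rng.normal(0.0, 2.0, 5_000)

err = spo2 - sao2
mean_error = err.mean()             # "bias"
sd_error = err.std(ddof=1)          # "precision"
arms = np.sqrt(np.mean(err ** 2))   # accuracy root-mean-square

# Bland-Altman 95% limits of agreement
loa = (mean_error - 1.96 * sd_error, mean_error + 1.96 * sd_error)
print(round(mean_error, 2), round(sd_error, 2), round(arms, 2))
print("LoA:", round(loa[0], 2), "to", round(loa[1], 2))
```

The manuscript's point is that when SpO2 and the laboratory value are separated by a deviation time, the error term above absorbs genuine physiological drift, inflating all three statistics.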
30. Spline linear mixed‐effects models for causal mediation analysis with longitudinal data.
- Author
-
Albert, Jeffrey M., Zhu, Hongxu, Dey, Tanujit, Sun, Jiayang, Woyczynski, Wojbor A., Powers, Gregory, and Min, Meeyoung
- Subjects
- *
MEDIATION (Statistics) , *CAUSAL models , *DATA analysis , *PRENATAL drug exposure , *SPLINES , *PRENATAL exposure - Abstract
Summary Often, causal mediation analysis is of interest when both the mediator and the final outcome are repeatedly measured, but limited work has been done for this situation (as opposed to where only the mediator is repeatedly measured). Available methods are primarily based on parametric models and tend to be sensitive to model assumptions. This article presents semiparametric, continuous‐time models to provide a flexible and robust approach to causal mediation analysis for longitudinal data, which allows these data to be unbalanced or irregular. Specifically, the method uses spline linear mixed‐effects models for the mediator and for the final outcome, with a two‐step approach to model‐fitting in which a predicted mediator is used as a covariate in the final outcome model. The models allow flexible functions for both the mean and individual response functions for each outcome. We derive estimated natural direct and indirect effects as a function of time using an extended mediation formula and sequential ignorability assumption. In simulation studies, we compare properties of estimated direct and indirect effects, and a delta method estimate of the standard error of the latter, under alternative approaches for predicting the mediator. The approach is illustrated using harmonised data from two cohort studies to examine attention as a mediator of the effect of prenatal tobacco exposure on externalising behaviour in children. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Optimal model averaging for partially linear models with missing response variables and error‐prone covariates.
- Author
-
Liang, Zhongqi, Wang, Suojin, and Cai, Li
- Subjects
- *
MEASUREMENT errors - Abstract
We consider the problem of optimal model averaging for partially linear models when the responses are missing at random and some covariates are measured with error. A novel weight choice criterion based on the Mallows‐type criterion is proposed for the weight vector to be used in the model averaging. The resulting model averaging estimator for the partially linear models is shown to be asymptotically optimal under some regularity conditions in terms of achieving the smallest possible squared loss. In addition, the existence of a local minimizing weight vector and its convergence rate to the risk‐based optimal weight vector are established. Simulation studies suggest that the proposed model averaging method generally outperforms existing methods. As an illustration, the proposed method is applied to analyze an HIV‐CD4 dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. A new kernel two-parameter prediction under multicollinearity in partially linear mixed measurement error model.
- Author
-
Yalaz, Seçil and Kuran, Özge
- Subjects
- *
MULTICOLLINEARITY , *ERRORS-in-variables models , *MEASUREMENT errors , *LENGTH measurement , *MONTE Carlo method , *COVARIANCE matrices , *FORECASTING - Abstract
A partially linear mixed effects model relating a response Y to predictors (X, Z, T) with the mean function $X^{T}\beta + Zb + g(T)$ is considered in this paper. When the parametric part's variable X is measured with additive error and there is ill-conditioned data suffering from multicollinearity, a new kernel two-parameter prediction method using the kernel ridge and Liu regression approach is suggested. The kernel two-parameter estimator of β and the predictor of b are derived by modifying the likelihood and Henderson methods. Matrix mean square error comparisons are calculated. We also demonstrate that under suitable conditions, the resulting estimator of β is asymptotically normal. The situation with an unknown measurement error covariance matrix is handled. A Monte Carlo simulation study, together with an earthquake data example, is compiled to evaluate the effectiveness of the proposed approach at the end of the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. The reliability and minimal detectable change of common tests and measures for temporomandibular disorders.
- Author
-
Olivencia, Ovidio, Kaplan, Kelby B., Graham, Ashley, Herpich, Nicole, Memmo, Leah, and Kolber, Morey J.
- Subjects
- *
TEMPOROMANDIBULAR disorders , *OROFACIAL pain , *INTRACLASS correlation , *RANGE of motion of joints , *INTER-observer reliability - Abstract
Background: Temporomandibular disorders are a source of orofacial pain. Understanding clinimetric properties of evaluation procedures is necessary for assessing impairments and determining response to interventions. Purpose: Reliability, minimal detectable change (MDC95), and 95% limits of agreement of TMJ examination procedures were investigated. Methods: Occlusion (central incisor alignment, overjet, overbite), mandibular dynamics (maximal incisor opening, laterotrusion, protrusion active range of motion (AROM)), auscultation, tenderness, and joint play were measured on 50 asymptomatic adults (30 females), mean age 24.8. The inter-rater reliability assessment used an intra-session design. Participants returned 24–48 h later for intra-rater assessments. Intraclass correlation coefficients (ICC) and Kappa values were used to determine reproducibility. Results: Intra-rater reliability for occlusion and AROM was ICC 3,1 ≥ 0.75, whereas inter-rater reliability was ICC 2,1 ≥ 0.68. Kappa values for inter-rater agreement of joint mobility were K = 0.18, whereas auscultation and palpation were K ≥ 0.48. Intra-rater Kappa values were ≥ 0.24, with lateral pterygoid region palpation having poor agreement. The MDC95 for occlusion was 1 mm, whereas AROM ranged from 3 to 6 mm. Mean AROM differences between raters were −1.16, −0.42, −0.18, and −0.8 mm for maximal incisor opening, left and right laterotrusion, and protrusion, respectively. Conclusion: AROM and occlusion measurements may be used with confidence; however, poor agreement for joint mobility measurements and lateral pterygoid region palpation must be recognized. When re-assessing measurements, a 3–6 and 1-mm change in AROM and occlusion, respectively, is required to be 95% certain change is not due to error. Future symptomatic population research is needed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
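MDC95 follows from the standard error of measurement via the usual formula MDC95 = 1.96 · √2 · SEM, with SEM = SD · √(1 − ICC). A worked example (the SD and ICC values here are hypothetical, not the study's):

```python
import math

def sem(sd, icc):
    # Standard error of measurement from test-retest reliability
    return sd * math.sqrt(1.0 - icc)

def mdc95(sd, icc):
    # Minimal detectable change at 95% confidence: the smallest observed
    # change exceeding measurement error in a test-retest design
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)

# Hypothetical numbers: an AROM measure with SD 5 mm and ICC 0.90
print(round(mdc95(5.0, 0.90), 1))  # ≈ 4.4 mm
```

The √2 term appears because a change score is the difference of two error-prone measurements, so its error variance is twice the single-measurement error variance.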
34. Estimation of Finite Population Variance Under Stratified Sampling in the Presence of Measurement Errors.
- Author
-
Haq, Abdul, Usman, Muhammad, and Khan, Manzoor
- Subjects
- *
SAMPLING methods , *STATISTICAL errors - Abstract
Measurement errors may significantly distort the properties of an estimator. In this paper, estimators of the finite population variance using the information on first and second raw moments of the study variable are developed under stratified random sampling that incorporate the variance of a measurement error component. Additionally, combined and separate estimators are also developed for estimating the finite population variance using supplementary information in terms of one or two auxiliary variables. An empirical study is carried out to study the effect of measurement error on the relative efficiencies of the proposed estimators. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
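The core mechanism the abstract addresses is that additive measurement error inflates the naive variance estimator by the error variance, Var(observed) = Var(true) + Var(error). A minimal sketch (a single unstratified sample rather than the paper's stratified estimators, with the error variance assumed known):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

true = rng.normal(10.0, 3.0, n)        # study variable, Var = 9
obs = true + rng.normal(0.0, 1.0, n)   # additive measurement error, Var = 1

naive = np.var(obs, ddof=1)            # estimates Var(true) + Var(error) = 10
# If the error variance is known (e.g. from a validation study), subtract it:
corrected = naive - 1.0

print(round(naive, 2), round(corrected, 2))
```

The paper's estimators embed this correction stratum by stratum and borrow strength from the first and second raw moments of auxiliary variables.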
35. Estimation of population mean using ranked set sampling in the presence of measurement errors.
- Author
-
Ahmadini, Abdullah Ali H., Singh, Rajesh, Raghav, Yashpal Singh, and Kumari, Anamika
- Subjects
- *
MEASUREMENT errors , *SAMPLING errors , *STATISTICAL sampling - Abstract
Ranked set sampling is widely acknowledged for its superior efficiency compared with simple random sampling. Only a small amount of work has been conducted using ranked set sampling when measurement errors are present. This study introduces innovative estimators utilizing ranked set sampling to assess the population mean when faced with both correlated and uncorrelated measurement errors. The expressions for the bias and mean squared error of the proposed estimators are derived up to the first-order approximation, revealing their superior performance compared to the other examined estimators. The efficacy of the suggested estimators in handling measurement errors was demonstrated through numerical illustration and simulation study investigations. The recommended estimators are further compared to the existing ones using the percentage relative efficiency and mean squared error, and the impact of measurement errors on the results is highlighted through the percentage computation of measurement errors. The innovative estimators suggested were formulated by judiciously incorporating ratio, exponential, and log estimators. Numerical examples involving expenditure and income, as well as simulated data generated from a normal population using R software, affirm the superior performance of the proposed estimators over existing ones such as the usual mean estimator and those proposed by Vishwakarma and Singh (2022), as evidenced by the higher percent relative efficiency and lower mean squared error. The evaluation of the percentage contribution of measurement error values confirms the impact of measurement errors on the properties of the estimators. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
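A small simulation (without measurement error, which is the paper's extension) shows why RSS is regarded as more efficient: ranking sets cheaply and measuring only one order statistic per set yields an unbiased mean estimator with smaller variance than simple random sampling of the same size.

```python
import numpy as np

rng = np.random.default_rng(6)
m, cycles, reps = 3, 10, 2000   # set size, cycles, Monte Carlo replications
draw = lambda k: rng.normal(50.0, 10.0, k)

def srs_mean():
    # Simple random sample of the same total size m * cycles
    return draw(m * cycles).mean()

def rss_mean():
    obs = []
    for _ in range(cycles):
        for i in range(m):
            ranked = np.sort(draw(m))   # rank a set of m units (cheaply)...
            obs.append(ranked[i])       # ...then measure only the i-th one
    return np.mean(obs)

srs = [srs_mean() for _ in range(reps)]
rss = [rss_mean() for _ in range(reps)]
print(round(np.var(srs) / np.var(rss), 2))  # relative efficiency > 1
```

With perfect ranking of normal data and m = 3, the relative efficiency is close to 1.9; measurement error and ranking error both erode this gain, which is what the proposed estimators are built to mitigate.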
36. Measurement error models with zero inflation and multiple sources of zeros, with applications to hard zeros.
- Author
-
Bhadra, Anindya, Wei, Rubin, Keogh, Ruth, Kipnis, Victor, Midthune, Douglas, Buckman, Dennis W., Su, Ya, Chowdhury, Ananya Roy, and Carroll, Raymond J.
- Subjects
ERRORS-in-variables models ,LATENT variables ,ALCOHOLIC beverages ,MEASUREMENT errors ,SURVIVAL analysis (Biometry) - Abstract
We consider measurement error models for two variables observed repeatedly and subject to measurement error. One variable is continuous, while the other variable is a mixture of continuous and zero measurements. This second variable has two sources of zeros. The first source is episodic zeros, wherein some of the measurements for an individual may be zero and others positive. The second source is hard zeros, i.e., some individuals will always report zero. An example is the consumption of alcohol from alcoholic beverages: some individuals consume alcoholic beverages episodically, while others never consume alcoholic beverages. However, with a small number of repeat measurements from individuals, it is not possible to determine those who are episodic zeros and those who are hard zeros. We develop a new measurement error model for this problem, and use Bayesian methods to fit it. Simulations and data analyses are used to illustrate our methods. Extensions to parametric models and survival analysis are discussed briefly. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
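The identifiability problem the abstract describes can be seen directly in simulation (all parameters are illustrative): with only two repeat measurements, a quarter of purely episodic consumers report all zeros and cannot be separated from hard zeros without a model.

```python
import numpy as np

rng = np.random.default_rng(7)
n, repeats = 100_000, 2

# Two sources of zeros (assumed rates): 30% of individuals are "hard
# zeros" (never consume); the rest consume episodically, skipping any
# given occasion with probability 0.5.
hard = rng.random(n) < 0.3
skip = rng.random((n, repeats)) < 0.5
amount = rng.lognormal(0.0, 1.0, (n, repeats))
intake = np.where(hard[:, None] | skip, 0.0, amount)

all_zero = (intake == 0).all(axis=1)
# With two repeats, 0.5**2 = 25% of episodic consumers look exactly like
# hard zeros -- the fraction the latent-variable model must resolve.
frac = (all_zero & ~hard).sum() / (~hard).sum()
print(round(frac, 3))
```

This is why the paper treats hard-zero status as a latent variable and fits the model with Bayesian methods rather than attempting to classify individuals from their few repeats.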
37. Optimal designs of accelerated degradation tests with random shock failures and measurement errors.
- Author
-
Wu, Lin, Zhou, Xiao‐Dong, and Yue, Rong‐Xian
- Subjects
IMPERIALIST competitive algorithm ,DISTRIBUTION (Probability theory) ,MEASUREMENT errors ,FAILURE mode & effects analysis ,ACCELERATED life testing - Abstract
Accelerated degradation tests (ADTs) are widely used for assessing the reliability of long‐life products. During an ADT, accelerated stresses not only expedite the degradation of test products but also increase the likelihood of encountering traumatic shocks. Moreover, it is important to acknowledge that measurement errors can be inevitable during the observation process of an ADT. Unfortunately, these errors are often overlooked in the optimal design of the ADT, especially when multiple competing failure modes are present. In this article, we propose a new approach to design ADTs when measurement errors exist and test products suffer from degradation failures and random shock failures. We utilize the Wiener process to model the degradation path, incorporating normally distributed measurement errors, and an exponential distribution to fit the time between random shock failures. Given the number of test products and the termination time, we optimize the ADT plans under three common design criteria. The equivalence theorem is used to verify the optimality of the optimal ADT plans. A real‐life example and sensitivity analysis are provided to illustrate our proposed method. The results demonstrate that when competing failure modes are present, the optimal ADT plans, which account for measurement errors, differ significantly from those that do not consider such errors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
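A sketch of the degradation component only (the random shock failures and design optimisation are omitted; all parameters are illustrative): the Wiener path is observed with additive normal measurement error, and the observed increment variance σ²Δt + 2τ² shows why ignoring ME inflates the estimated diffusion parameter.

```python
import numpy as np

rng = np.random.default_rng(8)
paths, steps, dt = 2_000, 50, 1.0
drift, sigma, tau = 0.5, 0.8, 0.3   # degradation rate, diffusion, ME sd

# Wiener degradation paths observed with additive normal measurement error
bm = sigma * np.sqrt(dt) * rng.normal(size=(paths, steps)).cumsum(axis=1)
t = dt * np.arange(1, steps + 1)
true_path = drift * t + bm
observed = true_path + rng.normal(0.0, tau, (paths, steps))

# Increments of the *observed* process: Var = sigma^2 * dt + 2 * tau^2,
# because differencing the i.i.d. errors doubles their variance.
inc = np.diff(observed, axis=1)
print(round(inc.var(), 3), round(sigma**2 * dt + 2 * tau**2, 3))
```

An ADT plan that ignores the 2τ² term will attribute measurement noise to the degradation process itself, which is the distortion the proposed optimal designs account for.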
38. Handling missing data and measurement error for early-onset myopia risk prediction models
- Author
-
Hongyu Lai, Kaiye Gao, Meiyan Li, Tao Li, Xiaodong Zhou, Xingtao Zhou, Hui Guo, and Bo Fu
- Subjects
Risk prediction ,Missing data ,Measurement error ,Early-onset myopia ,Multiple imputation ,Missing mechanisms ,Medicine (General) ,R5-920 - Abstract
Abstract Background Early identification of children at high risk of developing myopia is essential to prevent myopia progression by introducing timely interventions. However, missing data and measurement error (ME) are common challenges in risk prediction modelling that can introduce bias in myopia prediction. Methods We explore four imputation methods to address missing data and ME: single imputation (SI), multiple imputation under missing at random (MI-MAR), multiple imputation with calibration procedure (MI-ME), and multiple imputation under missing not at random (MI-MNAR). We compare four machine-learning models (Decision Tree, Naive Bayes, Random Forest, and Xgboost) and three statistical models (logistic regression, stepwise logistic regression, and least absolute shrinkage and selection operator logistic regression) in myopia risk prediction. We apply these models to the Shanghai Jinshan Myopia Cohort Study and also conduct a simulation study to investigate the impact of missing mechanisms, the degree of ME, and the importance of predictors on model performance. Model performance is evaluated using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). Results Our findings indicate that in scenarios with missing data and ME, using MI-ME in combination with logistic regression yields the best prediction results. In scenarios without ME, employing MI-MAR to handle missing data outperforms SI regardless of the missing mechanisms. When ME has a greater impact on prediction than missing data, the relative advantage of MI-MAR diminishes, and MI-ME becomes more superior. Furthermore, our results demonstrate that statistical models exhibit better prediction performance than machine-learning models. Conclusion MI-ME emerges as a reliable method for handling missing data and ME in important predictors for early-onset myopia risk prediction.
- Published
- 2024
- Full Text
- View/download PDF
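The imputation-then-prediction workflow this entry compares can be illustrated with a generic multiple-imputation sketch (not the paper's MI-ME calibration procedure); the synthetic data, the 30% missingness rate, and the choice of m = 5 imputations are assumptions for illustration only:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)            # predictor correlated with x1
y = rng.binomial(1, 1 / (1 + np.exp(-(x1 + x2))))

X = np.column_stack([x1, x2])
X[rng.random(n) < 0.3, 1] = np.nan            # 30% of x2 missing completely at random

# Multiple imputation: draw m stochastic imputations, fit a model on each
# completed dataset, and pool by averaging predicted probabilities.
m = 5
probs = []
for i in range(m):
    imp = IterativeImputer(sample_posterior=True, random_state=i)
    Xi = imp.fit_transform(X)
    clf = LogisticRegression().fit(Xi, y)
    probs.append(clf.predict_proba(Xi)[:, 1])
pooled = np.mean(probs, axis=0)

print(f"AUROC (pooled over {m} imputations): {roc_auc_score(y, pooled):.3f}")
```

Averaging predicted probabilities is one simple pooling choice; pooling coefficient estimates with Rubin's rules is the more conventional alternative when inference, rather than prediction, is the goal.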
39. Silence is golden, but my measures still see—why cheaper-but-noisier outcome measures in large simple trials can be more cost-effective than gold standards
- Author
-
Benjamin Woolf, Hugo Pedder, Henry Rodriguez-Broadbent, and Phil Edwards
- Subjects
Outcome assessment ,Questionnaires ,Measurement error ,Sampling error ,Loss to follow-up ,Response rate ,Medicine (General) ,R5-920 - Abstract
Abstract Objective To assess the cost-effectiveness of using cheaper-but-noisier outcome measures, such as a short questionnaire, for large simple clinical trials. Background To detect associations reliably, trials must avoid bias and random error. To reduce random error, we can increase the size of the trial and increase the accuracy of the outcome measurement process. However, with fixed resources, there is a trade-off between the number of participants a trial can enrol and the amount of information that can be collected on each participant during data collection. Methods To consider the effect on measurement error of using outcome scales with varying numbers of categories, we define and calculate the variance from categorisation that would be expected from using a category midpoint; define the analytic conditions under which such a measure is cost-effective; use meta-regression to estimate the impact of participant burden, defined as questionnaire length, on response rates; and develop an interactive web-app to allow researchers to explore the cost-effectiveness of using such a measure under plausible assumptions. Results An outcome scale with only a few categories greatly reduced the variance of non-measurement. For example, a scale with five categories reduced the variance of non-measurement by 96% for a uniform distribution. We show that a simple measure will be more cost-effective than a gold-standard measure if the relative increase in variance due to using it is less than the relative increase in cost from the gold standard, assuming it does not introduce bias in the measurement. We found an inverse power law relationship between participant burden and response rates such that doubling the burden on participants reduces the response rate by around one third.
Finally, we created an interactive web-app ( https://benjiwoolf.shinyapps.io/cheapbutnoisymeasures/ ) to allow exploration of when a cheap-but-noisy measure will be more cost-effective under realistic parameters. Conclusion Cheaper-but-noisier questionnaires containing just a few questions can be a cost-effective way of maximising power. However, their use requires a judgement on the trade-off between the potential increase in risk of information bias and the reduction in the potential for selection bias due to the expected higher response rates.
- Published
- 2024
- Full Text
- View/download PDF
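The 96% figure for a five-category scale in this entry can be reproduced under the stated assumptions (a uniform outcome, equal-width categories scored at their midpoints); the bin construction below is an illustrative sketch, not the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0.0, 1.0, size=200_000)   # true outcome, uniform on [0, 1)

k = 5                                     # number of response categories
midpoints = (np.floor(x * k) + 0.5) / k   # equal-width bins scored at midpoints

# Residual variance left after categorisation, as a share of total variance.
# Analytically this is 1/k^2: bin width 1/k, uniform within each bin.
residual_share = np.var(x - midpoints) / np.var(x)
print(f"residual variance share: {residual_share:.3f}")   # ~0.04, i.e. a 96% reduction
print(f"analytic value: {1 / k**2:.3f}")

# The reported burden/response-rate power law: if doubling questionnaire
# length cuts the response rate to about two thirds, response ∝ length^-alpha.
alpha = np.log(3 / 2) / np.log(2)
print(f"implied power-law exponent: {alpha:.2f}")
```

The 1/k² share also makes the abstract's trade-off concrete: a five-category scale sacrifices only 4% of the outcome variance relative to a perfectly continuous gold standard, under these uniform-distribution assumptions.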
40. AFFECT: an R package for accelerated functional failure time model with error-contaminated survival times and applications to gene expression data
- Author
-
Li-Pang Chen and Hsiao-Ting Huang
- Subjects
Boosting ,Gene expression ,Measurement error ,Survival analysis ,Computer applications to medicine. Medical informatics ,R858-859.7 ,Biology (General) ,QH301-705.5 - Abstract
Abstract Background Survival analysis has been used to characterize time-to-event data. In medical studies, a typical application is to analyze the survival time of specific cancers by using high-dimensional gene expressions. The main challenges include the involvement of non-informative gene expressions and a possibly nonlinear relationship between survival time and gene expressions. Moreover, due to possibly imprecise data collection or wrong records, measurement error might be ubiquitous in the survival time and its censoring status. Ignoring measurement error effects may incur biased estimators and wrong conclusions. Results To tackle those challenges and derive reliable estimates with an efficient computational implementation, we develop the R package AFFECT, which refers to the Accelerated Functional Failure time model with Error-Contaminated survival Times. Conclusions This package aims to correct for measurement error effects in survival times and implements a boosting algorithm under corrected data to determine informative gene expressions as well as derive the corresponding nonlinear functions.
- Published
- 2024
- Full Text
- View/download PDF
41. Sensitivity to Initial Data Errors in Interpreting Temperature Logging of an Isolated Injection Well Segment
- Author
-
K. A. Potashev, D. R. Salimyanova, A. B. Mazo, A. A. Davletshin, and A. V. Kosterin
- Subjects
oil reservoir ,temperature logging ,measurement error ,geological uncertainty ,inverse problem ,numerical modeling ,Mathematics ,QA1-939 - Abstract
This study considers the inverse problems inherent in interpreting temperature logging data from an isolated segment of the injection well in order to ascertain its operating period and the thermophysical properties of the oil reservoir. The forward problem of thermal conductivity was reduced to a one-dimensional axisymmetric formulation within the oil reservoir layer, disregarding the vertical thermal exchange with neighboring layers. The inverse problem of determining the well operating period was solved by reformulating the forward problem with regard to the temperature field derivative, which enabled the use of first-order optimization methods. Thus, Nesterov's method was applied. An algorithm to automatically scale one of the method's parameters (step length) was developed, and the optimal value of the second parameter (inertial step) was calculated. This increased the efficiency of the method by 10–15 % in solving the problem under consideration. The algorithm's stability against perturbations in the initial data on temperature and thermophysical properties was demonstrated. The sensitivity analysis revealed that a 1 % error in the temperature measurements results in a standard deviation of the solution of about 2 % from the true value of the well operating period. A similar level of error was seen when the thermal diffusivity was over- or underestimated by approximately 15 %. The solution showed little sensitivity to variations in the heat transfer coefficient between the oil reservoir and the well at characteristic magnitudes; even with a twofold distortion, the error in the determination of the well operating period did not exceed 1.5 %.
To keep the error of thermometry interpretation within 1 %, temperature measurements must have an error margin of no more than 0.25 %, alongside precisely specified thermophysical properties of the oil reservoir; alternatively, when temperature is measured accurately, the rock thermal diffusivity must be set within an error margin of less than 3 %, which is nearly impossible under real conditions. Increasing the number of temperature measurements diminishes the sensitivity to measurement errors, with the optimal efficacy achieved at 10 measurements, rendering further increments impractical. In addition, the stability of the algorithm and the sensitivity of the solution of the inverse problem of determining the reservoir thermal diffusivity for a given well operating period to temperature measurement errors were assessed. The results show that a 1 % error in temperature measurements leads to a standard deviation of about 6 % from the true value.
- Published
- 2024
- Full Text
- View/download PDF
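The optimization step this entry describes (Nesterov's method with automatic step-length scaling) can be sketched generically; the quadratic objective, the backtracking rule, and the momentum schedule below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def nesterov(f, grad, x0, t0=1.0, iters=300):
    """Nesterov accelerated gradient with backtracking step-length scaling."""
    x, x_prev = x0.copy(), x0.copy()
    for k in range(1, iters + 1):
        beta = (k - 1) / (k + 2)          # standard momentum schedule
        y = x + beta * (x - x_prev)       # look-ahead point
        g = grad(y)
        step = t0
        # halve the step until a sufficient-decrease condition holds
        while f(y - step * g) > f(y) - 0.5 * step * g @ g:
            step *= 0.5
        x_prev, x = x, y - step * g
    return x

# Ill-conditioned quadratic f(x) = 0.5 x'Ax - b'x with minimiser x* = A^-1 b
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x_star = nesterov(f, grad, np.zeros(2))
print(x_star)   # ≈ [1.0, 0.1]
```

Resetting the trial step to `t0` each iteration keeps the sketch simple; an adaptive scheme such as the one the abstract mentions would instead carry the accepted step length forward between iterations.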
42. Measuring the Physiologic Use Conditions of Medical Devices: An Introduction
- Author
-
Lahm, Ryan, Baxter, Walt, Baxter, Walt, editor, and Lahm, Ryan, editor
- Published
- 2024
- Full Text
- View/download PDF
43. The Accuracy Research of the Exposed Incremental Encoder Based on Image Formation Method for SEMS Applications
- Author
-
Kuznetsov, Vladimir N., Garmaev, Ayur T., Deyneka, Ivan G., Vasilev, Aleksandr S., Kacprzyk, Janusz, Series Editor, Novikov, Dmitry A., Editorial Board Member, Shi, Peng, Editorial Board Member, Cao, Jinde, Editorial Board Member, Polycarpou, Marios, Editorial Board Member, Pedrycz, Witold, Editorial Board Member, Tarasova, Irina Leonidovna, editor, and Kulik, Boris Alexandrovich, editor
- Published
- 2024
- Full Text
- View/download PDF
44. Zvi Griliches (1930–1999)
- Author
-
Berman, Eli, Jaffe, Adam B., and Cord, Robert A., editor
- Published
- 2024
- Full Text
- View/download PDF
45. A Two-Stage Approach to a Latent Variable Mixed-Effects Location-Scale Model
- Author
-
Blozis, Shelley A., Lai, Mark H. C., Wiberg, Marie, Kim, Jee-Seon, Hwang, Heungsun, editor, Wu, Hao, editor, and Sweet, Tracy, editor
- Published
- 2024
- Full Text
- View/download PDF
46. Investigating the Impact of Equating on Measurement Error Using Generalizability Theory
- Author
-
Li, Dongmei, Wiberg, Marie, Kim, Jee-Seon, Hwang, Heungsun, editor, Wu, Hao, editor, and Sweet, Tracy, editor
- Published
- 2024
- Full Text
- View/download PDF
47. Measurement Techniques for the Composition of Air Environments: Development and Application
- Author
-
Ponomareva, Olga B., Kanaeva, Yulia V., Gaiko, Mariia V., Sobina, Egor P., editor, Medvedevskikh, Sergey V., editor, Kremleva, Olga N., editor, Filimonov, Ivan S., editor, Kulyabina, Elena V., editor, Kolobova, Anna V., editor, Bulatov, Andrey V., editor, and Dobrovolskiy, Vladimir I., editor
- Published
- 2024
- Full Text
- View/download PDF
48. Method for Taking into Account Measurement Errors When Sorting Elements into Selective Groups
- Author
-
Filipovich, O., Nevar, G., Balakina, N., Voloshina, N., Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Tan, Kay Chen, Series Editor, Radionov, Andrey A., editor, and Gasiyarov, Vadim R., editor
- Published
- 2024
- Full Text
- View/download PDF
49. Study on the Influence of Antenna Arrays Anti-Jamming Algorithms on GNSS Receiver Single Point Position
- Author
-
Wang, Yaoding, Chen, Si, Hou, Yuzhuo, Su, Chengeng, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Tan, Kay Chen, Series Editor, Yang, Changfeng, editor, and Xie, Jun, editor
- Published
- 2024
- Full Text
- View/download PDF
50. Estimation of the density for censored and contaminated data.
- Author
-
Van Keilegom, Ingrid and Kekeç, Elif
- Subjects
- *
ERRORS-in-variables models , *DURATION of pregnancy , *LAGUERRE polynomials , *ASYMPTOTIC normality , *MEASUREMENT errors , *SURVIVAL analysis (Biometry) , *CENSORING (Statistics) - Abstract
Consider a situation where one is interested in estimating the density of a survival time that is subject to random right censoring and measurement errors. This often happens in practice, as in public health (pregnancy length), medicine (duration of infection), and ecology (duration of forest fires), among other fields. We assume a classical additive measurement error model with Gaussian noise and unknown error variance and a random right censoring scheme. Under this setup, we develop minimal conditions under which the assumed model is identifiable when no auxiliary variables or validation data are available, and we offer a flexible estimation strategy using Laguerre polynomials for the estimation of the error variance and the density of the survival time. The asymptotic normality of the proposed estimators is established, and the numerical performance of the methodology is investigated on both simulated and real data on gestational age. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
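The classical additive measurement-error setup in this entry (W = X + ε with Gaussian ε) can be illustrated with a minimal moment check, assuming a known error variance for the demonstration rather than the unknown variance the paper estimates; censoring is omitted, and this is not the Laguerre-based estimator itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.exponential(scale=2.0, size=n)    # true survival times (unobserved)
sigma = 0.5
w = x + rng.normal(0.0, sigma, size=n)    # contaminated observations W = X + eps

# Naive moments are distorted: Var(W) = Var(X) + sigma^2, so the observed
# density is over-dispersed. With sigma known, Var(X) is recoverable.
var_naive = np.var(w)
var_corrected = var_naive - sigma**2
print(f"naive:     {var_naive:.2f}")
print(f"corrected: {var_corrected:.2f}   (true Var(X) = {2.0**2:.2f})")
```

Recovering the full density, rather than a single moment, requires deconvolution; the paper's contribution is doing so when the error variance is unknown and the times are also right-censored.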