3,977 results for "Random effects"
Search Results
2. Subgroup learning for multiple mixed-type outcomes with block-structured covariates
- Author
- Zhao, Xun, Tang, Lu, Zhang, Weijia, and Zhou, Ling
- Published
- 2025
3. Evaluating the impact of misspecified spatial neighboring structures in Bayesian CAR models
- Author
- Somua-Wiafe, Ernest, Minkah, Richard, Doku-Amponsah, Kwabena, Asiedu, Louis, Acheampong, Edward, and Iddi, Samuel
- Published
- 2025
4. Assessment of supervised longitudinal learning methods: Insights from predicting low birth weight and very low birth weight using prenatal ultrasound measurements
- Author
- Zhang, Cancan, Yu, Xiufan, and Zhang, Bo
- Published
- 2024
5. Extending the Vertical Model: An Alternative Approach to Competing Risks with Clustered Data
- Author
- Battaglia, Salvatore, Fiocco, Marta, Putter, Hein, Pollice, Alessio, editor, and Mariani, Paolo, editor
- Published
- 2025
6. Likelihood-Based Boosting for Variance Components Selection in Linear Mixed Models
- Author
- Battauz, Michela, Vidoni, Paolo, Pollice, Alessio, editor, and Mariani, Paolo, editor
- Published
- 2025
7. The effect of verdict system on juror decisions: a quantitative meta-analysis.
- Author
- Jackson, Elaine, Curley, Lee, Leverick, Fiona, and Lages, Martin
- Abstract
We study the effect of the Scottish three-verdict system (guilty, not guilty, not proven) and the Anglo-American two-verdict system (guilty, not guilty) on juror decisions by combining data sets from 10 mock trials reported in suitable studies. A logistic regression with random effects uses the exact numbers of convictions and acquittals from a total of 1778 jurors across the 10 mock trials to reliably estimate the effect of the verdict system. We found a statistically significant verdict effect suggesting that the odds of a conviction by a juror under the three-verdict system are about 0.6 times those under a conventional two-verdict system, that is, roughly 40% lower. Possible explanations and implications of this verdict effect are discussed. This finding helps to better understand juror decision making in the context of the current reform of the Scottish three-verdict system into a two-verdict system. [ABSTRACT FROM AUTHOR]
- Published
- 2025
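For orientation, a plausible form of the random-effects logistic regression described in the abstract above (a sketch based on the abstract only; the authors' exact specification may differ): juror i in mock trial j, with a trial-level random intercept and a fixed effect for the verdict system.

```latex
\operatorname{logit} \Pr(\text{convict}_{ij} = 1)
  = \beta_0 + \beta_1\, \text{ThreeVerdict}_j + u_j,
\qquad u_j \sim \mathcal{N}(0, \sigma_u^2).
```

The reported effect corresponds to exp(β1) ≈ 0.6, i.e., odds of conviction about 40% lower under the three-verdict system.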
8. The effects of deprivation, age, and regional differences in COVID-19 mortality from 2020 to 2022: a retrospective analysis of public provincial data.
- Author
- Chen, Anqi A., Renouf, Elizabeth M., Dean, Charmaine B., and Hu, X. Joan
- Subjects
- SARS-CoV-2 Omicron variant, COVID-19, PUBLIC health, DEATH rate, COVID-19 pandemic
- Abstract
Background: Coronavirus disease (COVID-19) quickly spread around the world after its initial identification in Wuhan, China in 2019 and became a global public health crisis. COVID-19 related hospitalizations and deaths, as important disease outcomes, have been investigated by many studies, while less attention has been given to the relationship between these two outcomes at the public health unit level. In this study, we aim to establish the relationship between counts of deaths and hospitalizations caused by COVID-19 over time across 34 public health units in Ontario, Canada, taking demographic, geographic, socio-economic, and vaccination variables into account.
Methods: We analyzed daily data for the 34 health units in Ontario between March 1, 2020 and June 30, 2022. Associations between numbers of COVID-19 related deaths and hospitalizations were explored over three subperiods according to the availability of vaccines and the dominance of the Omicron variant in Ontario. A generalized additive model (GAM) was fit in each subperiod. Heterogeneity across public health units was formulated via a random intercept in each of the models.
Results: Mean daily COVID-19 deaths increased quickly as daily hospitalizations increased, particularly when daily hospitalizations were fewer than 20. In all the subperiods, mean daily deaths of a public health unit were significantly associated with its population size and the proportion of confirmed cases in subjects over 60 years old. The proportion of fully vaccinated (2 doses of primary series) people in the 60+ age group was a significant factor after the availability of the COVID-19 vaccines. The deprivation index, a measure of poverty, had a significantly positive effect on COVID-19 mortality after the dominance of the Omicron variant in Ontario. Quantification of these effects was provided, including effects related to public health units.
Conclusions: The differences in COVID-19 mortality across health units decreased over time, after adjustment for other covariates. In the last subperiod, when most public health protections had been lifted and the Omicron variant dominated, the least advantaged group might suffer higher COVID-19 mortality. Interventions such as paid sick days and cleaner indoor air should be made available to counter the lifting of health protections. [ABSTRACT FROM AUTHOR]
- Published
- 2025
9. Context matters: The importance of investigating random effects in hierarchical models for early childhood education researchers.
- Author
- Corkins, Clarissa M., Harrist, Amanda W., Washburn, Isaac J., Hubbs-Tait, Laura, Topham, Glade L., and Swindle, Taren
- Subjects
- RANDOM effects model, EARLY childhood education, RESEARCH personnel, EDUCATION research, ACCOUNTING students
- Abstract
• Nested or hierarchical school- and center-based data benefit from HLM analysis.
• School-level effects can moderate relations between student/classroom variables.
• The presence of random effects can be identified even when the higher-level variables are not measured.
This paper highlights the importance of examining individual, classroom, and school-level variables simultaneously in early childhood education research. While it is well known that Hierarchical Linear Modeling (HLM) in school-based studies can be used to account for the clustering of students within classrooms or schools, it is less well known that HLM can use random effects to investigate how higher-level factors (e.g., effects that vary by school) moderate associations between lower-level factors. This possible moderation can be detected even if higher-level data are not collected. Despite this important use of HLM, a clear resource explaining how to test this type of effect is not available for early childhood researchers. This paper demonstrates this use of HLM by presenting three analytic examples using empirical early childhood education data. First, we review the school-level effects literature and HLM concepts to provide the rationale for testing cross-level moderation effects in education research; next, we briefly review the literature on the variables used in our three examples (viz., teacher beliefs and student socioemotional behavior); next, we describe the dataset to be analyzed; and finally, we guide the reader step by step through analyses that show the presence and absence of fixed effects of teacher beliefs on student social outcomes and the erroneous conclusions that can occur if school-level moderation (i.e., random effects) tests are excluded from analyses. This paper provides evidence for the importance of testing how teachers and students affect each other as a function of school differences, shows how this can be accomplished, and highlights the need to examine random effects of clustering in educational models to ensure the full context is accounted for when predicting student outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2025
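The cross-level moderation test described above can be illustrated with a random-slope model; a minimal sketch using statsmodels (not the authors' code; data and variable names are simulated and hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate students nested in schools, with a teacher-belief effect
# whose strength varies by school (school-level moderation).
rng = np.random.default_rng(0)
n_schools, n_students = 30, 40
school = np.repeat(np.arange(n_schools), n_students)
school_slope = 0.5 + rng.normal(0, 0.3, n_schools)
teacher_belief = rng.normal(size=school.size)
social_outcome = school_slope[school] * teacher_belief + rng.normal(size=school.size)
df = pd.DataFrame({"school": school, "teacher_belief": teacher_belief,
                   "social_outcome": social_outcome})

# Random intercept only vs. random slope for teacher_belief across schools.
m0 = smf.mixedlm("social_outcome ~ teacher_belief", df, groups="school").fit(reml=False)
m1 = smf.mixedlm("social_outcome ~ teacher_belief", df, groups="school",
                 re_formula="~teacher_belief").fit(reml=False)

# A large likelihood-ratio statistic flags school-level moderation even
# though no school-level variable was ever measured.
print("LR statistic:", 2 * (m1.llf - m0.llf))
```

Comparing `m0` and `m1` is the random-effects test the paper argues should not be skipped.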
10. Non-proportional hazards model with a PVF frailty term: application with a melanoma dataset.
- Author
- Rosa, Karen C., Calsavara, Vinicius F., and Louzada, Francisco
- Subjects
- MAXIMUM likelihood statistics, CLINICAL trials, SKIN cancer, HAZARD function (Statistics), PROGNOSIS, SURVIVAL analysis (Biometry)
- Abstract
Survival data analysis often uses the Cox proportional hazards (PH) model. This model is widely applied due to its straightforward interpretation of the hazard ratio under the assumption that the ratio of the hazard rates for two subjects remains constant over time. However, in several randomized clinical trials with long-term survival data comparing two new treatments, it is frequently observed that Kaplan-Meier plots exhibit crossing survival curves. Under this violation of the PH assumption, the Cox PH model cannot be applied to evaluate the treatment's effect on survival. This paper introduces a novel long-term survival model with non-PH that incorporates a frailty term into the hazard function. This model allows us to examine the effect of prognostic factors on survival and quantify the degree of unobservable heterogeneity. The model parameters are estimated using the maximum likelihood estimation procedure, and we evaluate the performance of the proposed models through simulation studies. Additionally, we demonstrate the applicability of our approach by fitting the models to a real skin cancer dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2025
11. Evaluating Transition Rules for Enhancing Fairness in Bonus–Malus Systems: An Application to the Saudi Arabian Auto Insurance Market.
- Author
- Alyafie, Asrar, Constantinescu, Corina, and Yslas, Jorge
- Subjects
- INSURANCE companies, AUTOMOBILE insurance, INSURANCE premiums, POLICYHOLDERS, RELATIVITY
- Abstract
A Bonus–Malus System (BMS) is a ratemaking mechanism used in insurance to adjust premiums based on a policyholder's claim history, with the goal of segmenting risk profiles more accurately. A BMS typically comprises three key components: the number of BMS levels, the transition rules dictating the movements of policyholders within the system, and the relativities used to determine premium adjustments. This paper explores the impact of modifications to these three elements on risk classification, assessed through the mean squared error. The model parameters are calibrated with real-world data from the Saudi auto insurance market. We begin the analysis by focusing on transition rules based solely on claim frequency, a framework in which most implemented BMSs work, including the current Saudi BMS. We then consider transition rules that depend on frequency and severity, in which higher penalties are given for large claim sizes. The results show that increasing the number of levels typically improves risk segmentation but requires balancing practical implementation constraints and that the adequate selection of the penalties is critical to enhancing fairness. Moreover, the study reveals that incorporating a severity-based penalty enhances risk differentiation, especially when there is a dependence between the claim frequency and severity. [ABSTRACT FROM AUTHOR]
- Published
- 2025
12. Risk adjustment-based statistical control charts with a functional covariate.
- Author
- Hu, Qiuyan and Liu, Liu
- Subjects
- STATISTICAL process control, QUALITY control charts, PRINCIPAL components analysis, LOGISTIC regression analysis, MOVING average process
- Abstract
In real applications, there are situations in which quality characteristics are significantly affected by function-valued covariates such as electrocardiogram (ECG) signals; however, previous surveillance methods with covariate adjustment mainly use scalar-valued covariates to construct control charts to investigate the stability of nonindustrial processes, such as in the medical and public health fields. Few existing approaches address the infinite-dimensional regression encountered with function-valued covariates. Thus, an applicable scheme needs to be developed. Here, by relaxing the assumption of normality, a novel surveillance strategy for functional logistic regression (FLR) is proposed. In this approach, the response is a binary variable, and the covariate information is obtained from functional data. To address the above problems, functional principal component analysis (FPCA) is used to extend the score test to situations with functional covariates. Moreover, the score statistic is integrated into an exponentially weighted moving average (EWMA) charting scheme to detect abnormal changes in the location and scale parameters based on additional information. The simulation results show that the proposed approach is more efficient than previous approaches in detecting small to moderate shifts. Finally, the proposed charting scheme is applied to a real case study to demonstrate its utility and applicability. [ABSTRACT FROM AUTHOR]
- Published
- 2025
13. THE RELEVANCE OF FINANCIAL PERFORMANCE IN DETERMINING STOCK PRICES OF INSURANCE COMPANIES.
- Author
- Karki, Dipendra, Dahal, Rewan Kumar, Perera, Wasantha K. L., Wimalasiri, Eranga M., and Ghimire, Kishor
- Subjects
- STOCKS (Finance), EARNINGS per share, ECONOMIC indicators, INSURANCE companies, LIFE insurance companies, INSURANCE premiums
- Abstract
Purpose. Despite their growth and economic significance, insurance stock prices raise concerns. This study investigates the factors influencing stock prices in the Nepalese insurance industry, specifically focusing on the relationship between financial performance indicators and stock pricing.
Design/methodology/approach. This research employs panel data regression analysis spanning eight years (FY 2014/15 to 2021/22) on ten insurance companies, disaggregated into life and non-life subsets. It investigates financial variables such as Return on Assets (ROA), Earnings per Share (EPS), Return on Equity (ROE), Net Profit Margin (NPM), and Book Value per Share (BVPS).
Findings. The research reveals a consistently positive and significant influence of EPS (p<0.05) on stock prices across all models. The Random Effects Model confirms that only EPS and ROE significantly affect stock prices in life and non-life insurance companies, with ROE exhibiting a notable negative impact. ROA, NPM, and BVPS show no significance, indicating variability in their impact on stock pricing.
Research implications. This study provides practitioners with insights into financial factors driving stock prices, aiding strategic decision-making. The findings contribute to a deeper understanding of the dynamics of the Nepalese insurance market and offer guidance for future research and policy interventions.
Originality/value. This study uniquely analyzes the insurance sector by incorporating life and non-life subsets, providing a more detailed analysis. Employing robust analytical techniques, it comprehensively explores the relationship between financial performance and stock pricing, contributing empirical evidence and insights to industry stakeholders and academia. [ABSTRACT FROM AUTHOR]
- Published
- 2024
14. Modeling product degradation with heterogeneity: A general random-effects Wiener process approach.
- Author
- Zhai, Qingqing, Li, Yaqiu, and Chen, Piao
- Subjects
- INVERSE Gaussian distribution, MONTE Carlo method, WIENER processes, EXPECTATION-maximization algorithms, FICK'S laws of diffusion
- Abstract
Degradation of many products in practical applications is often subject to unit-to-unit heterogeneity. Such heterogeneity can be attributed to the heterogeneous quality of the raw materials and the fluctuation of the manufacturing process, as well as the heterogeneous usage conditions and environments. The heterogeneity leads to the scattering of the degradation rates and diffusion intensities in the population. To model this phenomenon, this study proposes a general random-effects Wiener process model that accounts for the unit-to-unit heterogeneity in the degradation drift and the volatility simultaneously. In particular, the drift of the Wiener process is characterized by a normal distribution and the diffusion parameter is characterized by an independent inverse Gaussian distribution. The proposed model is flexible for characterization of heterogeneous degradation, and permits an analytically tractable model inference. An EM algorithm incorporating the variational Bayesian method is developed to estimate the model parameters, and a parametric bootstrap approach is proposed to construct confidence intervals. The performance of the proposed model and the estimation algorithm is validated by Monte Carlo simulations. The degradation data of an infrared LED device and the wear data of the magnetic head of a hard disk drive are studied based on the proposed model. With comprehensive comparative studies, the good performance of the proposed model in fitting the real degradation data is validated. [ABSTRACT FROM AUTHOR]
- Published
- 2024
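One plausible formalization of the model sketched in the abstract above (my notation, not necessarily the paper's): for unit-specific drift η and diffusion λ,

```latex
Y(t) \mid \eta, \lambda \;=\; \eta\, t + \sqrt{\lambda}\, B(t),
\qquad \eta \sim \mathcal{N}(\mu, \sigma_\eta^2),
\qquad \lambda \sim \mathcal{IG}(a, b),
```

with η and λ independent across units and B(·) a standard Brownian motion, so that both the degradation rate and its volatility vary from unit to unit.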
15. Predicting forest parameters through generalized linear mixed models using GEDI metrics in a temperate forest in Oaxaca, Mexico.
- Author
- Ortiz-Reyes, Alma Delia, Barrera-Ortega, Daisy, Velasco-Bautista, Efraín, Romero-Sánchez, Martin Enrique, and Correa-Díaz, Arian
- Subjects
- STANDARD deviations, INDEPENDENT variables, TEMPERATE forests, RANDOM effects model, GAMMA distributions
- Abstract
Lidar sensors are active remote sensing instruments that provide data on forest structure, which serve as inputs in the prediction of forest parameters. The objective of this study was to evaluate the performance of mixed-effects models to predict basal area (BA), aboveground biomass (AGB), and wood volume (VOL) from GEDI (Global Ecosystem Dynamics Investigation) predictor variables. Linear mixed and generalized linear mixed models (LMM and GLMM) were fitted with a clustering structure as a random effect between forest parameters and GEDI metrics. LMM and GLMM performed similarly regarding root mean square error (RMSE) and correlation coefficients (r) between observed and predicted data. Our results showed moderate correlations between RH50 and BA (r = 0.53), AGB (r = 0.54), and RH80 with VOL (r = 0.62). Another finding was the correlation between GEDI predictions of AGB density and the biomass predicted in this study, with an agreement of 79% and 77% and an RMSE of 76.95 and 82.34 Mg ha⁻¹ for LMM and GLMM, respectively. We conclude that the use of mixed models allowed for the inclusion of both fixed and random effects that capture the systematic and random variation in the data, and they provide a flexible framework for modelling forest parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2024
16. Spatio-Temporal Modeling for Record-Breaking Temperature Events in Spain.
- Author
- Castillo-Mateo, Jorge, Gelfand, Alan E., Gracia-Tabuenca, Zeus, Asín, Jesús, and Cebrián, Ana C.
- Subjects
- GLOBAL warming, LOGISTIC regression analysis, CLIMATE change, STATISTICS, DATA analysis
- Abstract
Record-breaking temperature events now feature frequently in the news, viewed as evidence of climate change. With this as motivation, we undertake the first substantial spatial modeling investigation of temperature record-breaking across years for any given day within the year. We work with a dataset consisting of over 60 years (1960–2021) of daily maximum temperatures across peninsular Spain. Formal statistical analysis of record-breaking events is an area that has received attention primarily within the probability community, dominated by results for the stationary record-breaking setting with some additional work addressing trends. Such effort is inadequate for analyzing actual record-breaking data. Resulting from novel and detailed exploratory data analysis, we propose rich hierarchical conditional modeling of the indicator events which define record-breaking sequences. After suitable model selection, we discover explicit trend behavior, necessary autoregression, significance of distance to the coast, useful interactions, helpful spatial random effects, and very strong daily random effects. Illustratively, the model estimates that global warming trends have increased the number of records expected in the past decade almost 2-fold, 1.93 (1.89, 1.98), but also estimates highly differentiated climate warming rates in space and by season. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work. [ABSTRACT FROM AUTHOR]
- Published
- 2024
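A baseline result that makes record counts informative: for a stationary exchangeable series, the classical record-process results give

```latex
\Pr(\text{record in year } t) = \frac{1}{t},
\qquad
\mathbb{E}[\#\text{records in } T \text{ years}] = \sum_{t=1}^{T} \frac{1}{t},
```

so sustained exceedances of this 1/t baseline, such as the near 2-fold increase estimated here, are evidence of non-stationarity (e.g., warming trends).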
17. Analyzing Matched 2 × 2 Tables from all Corners.
- Author
- Aerts, Marc and Molenberghs, Geert
- Subjects
- GENERALIZED estimating equations, MYOCARDIAL infarction, PRIME ministers
- Abstract
Square 2 × 2 tables with binary data from matched pairs are typically analyzed using Cochran-Mantel-Haenszel methodology, conditional logistic regression, or random intercepts logistic regression. These are all "pair-specific" types of approaches. However, many more methods and models for clustered binary data, including marginal models and marginalizable pair-specific models, can be applied. We provide a comprehensive overview of methods and apply them all to two well-known example datasets, the prime minister's performance and the myocardial infarction datasets. The simple setting of matched binary data allows us to compare and relate different models, methods and their estimates. A technical explanation is given for why in some settings boundary estimates are obtained. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
18. Person Specific Parameter Heterogeneity in the 2PL IRT Model.
- Author
- Perez, Alexandra Lane and Loken, Eric
- Subjects
- ITEM response theory, MULTILEVEL models, HETEROGENEITY, ALGORITHMS
- Abstract
Following Kelderman and Molenaar's demonstration that a factor model with person-specific factor loadings is almost indistinguishable from the standard factor model in terms of overall fit, we examined person-specific measurement models in Item Response Theory: person-specific discrimination and difficulty parameters were created by adding random variation at the item-by-person level. Using standard fitting algorithms for the 2PL IRT model, there was modest evidence of person- or item-level misfit using common diagnostic tools. The item difficulties were well estimated, but the item discriminations were noticeably underestimated. As found by Kelderman and Molenaar, factor scores were estimated with less than expected reliability due to the underlying heterogeneity. The person-specific models considered here are basically limiting cases of IRT models with multilevel, mixture, or differential item functioning structure. We conclude with some thoughts regarding real-world sources of heterogeneity that might go unacknowledged in common testing applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
19. Intrinsic Functional Partially Linear Poisson Regression Model for Count Data.
- Author
- Xu, Jiaqi, Lu, Yu, Su, Yuanshen, Liu, Tao, Qi, Yunfei, and Xie, Wu
- Subjects
- POISSON regression, REGRESSION analysis, KERNEL functions, DATA analysis, DATA modeling
- Abstract
Poisson regression is a statistical method specifically designed for analyzing count data. Considering the case where the functional and vector-valued covariates exhibit a linear relationship with the log-transformed Poisson mean, while the covariates in complex domains act as nonlinear random effects, an intrinsic functional partially linear Poisson regression model is proposed. This model flexibly integrates predictors from different spaces, including functional covariates, vector-valued covariates, and other non-Euclidean covariates taking values in complex domains. A truncation scheme is applied to approximate the functional covariates, and the random effects related to non-Euclidean covariates are modeled based on the reproducing kernel method. A quasi-Newton iterative algorithm is employed to optimize the parameters of the proposed model. Furthermore, to capture the intrinsic geometric structure of the covariates in complex domains, the heat kernel is employed as the kernel function, estimated via Brownian motion simulations. Both simulation studies and real data analysis demonstrate that the proposed method offers significant advantages over the classical Poisson regression model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
20. A regional analysis of climate change effects on global snow crab fishery.
- Author
- Tao, Jingjing, Quagrainie, Kwamena K., Foster, Kenneth A., and Widmar, Nicole Olynk
- Subjects
- CRAB populations, SNOWPACK augmentation, RANDOM effects model, FISHERY policy, FISHERY management, SEA ice
- Abstract
The snow crab fishery faces increasing vulnerability to environmental factors, yet the literature on the relationship between climate change and snow crab harvest remains limited. This study estimates snow crab harvest functions using climate change indicators with unbalanced panel data of snow crab production from the eastern Bering Sea (Alaska), the southern Gulf of St. Lawrence (Canada), the Sea of Japan, and the Barents Sea (Norway‐Russia). The relationship between snow crab biomass, stock, and catch is analyzed and the endogeneity of stock in the harvest function is also addressed using climate change indicators as instrumental variables (IVs). The results show that the extent of Arctic sea ice is effective in addressing the endogeneity, and the random effects IV model with error components two stage least squares estimator performs the best to control heterogeneity. A 1% increase in snow crab fishing effort is associated with a 0.42% increase in snow crab harvest, and a 1% increase in snow crab stock causes a 0.98% increase in snow crab harvest. The reported estimates indicate a large stock‐harvest elasticity and provide supporting evidence to prioritize stock enhancement in snow crab fishery policy designs to maintain stocks at sustainable levels and minimize government expenditures on subsidies.
Recommendations:
• This study explores how snow crab harvests are influenced by snow crab populations and fishing efforts in the context of global warming across various global regions, including the Bering Sea, the Gulf of St. Lawrence, the Sea of Japan, and the Barents Sea.
• A 1% increase in fishing effort is associated with a 0.42% increase in harvest, while a 1% increase in snow crab population leads to a 0.98% increase in harvest, showing a high dependency on snow crab biomass.
• Arctic sea ice extent is identified as a crucial climate factor affecting snow crab biomass and harvests, making it a valuable variable for understanding and managing snow crab populations.
• The study supports the prioritization of stock enhancement policies by fishery agencies and suggests standardizing how fishing effort is measured across different regions to improve snow crab fishery management and future research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
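The reported elasticities are consistent with a log-log (Cobb–Douglas-type) harvest function; a sketch with hypothetical notation, where u_i is the regional random effect and stock is instrumented by Arctic sea-ice extent:

```latex
\ln(\text{harvest}_{it}) = \alpha
  + 0.42\,\ln(\text{effort}_{it})
  + 0.98\,\ln(\text{stock}_{it})
  + u_i + \varepsilon_{it}.
```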
21. Probability of detection for corrosion-induced steel mass loss using Fe–C coated LPFG sensors.
- Author
- Zhuo, Ying, Ma, Pengfei, Guo, Chuanrui, and Chen, Genda
- Subjects
- NONDESTRUCTIVE testing, COLLECTING of accounts, ACQUISITION of data, DETECTORS, SURFACE coatings
- Abstract
The traditional probability of detection (POD) method, as described in the Department of Defense Handbook MIL-HDBK-1823A for nondestructive evaluation systems, does not take the time dependency of data collection into account. When applied to in situ sensors for the measurement of flaw sizes, such as fatigue-induced crack length and corrosion-induced mass loss, the validity and reliability of the traditional method is unknown. In this paper, the POD for in situ sensors and their associated reliability assessment for detectable flaw sizes are evaluated using a size-of-damage-at-detection (SODAD) method and a random parameter model (RPM). Although applicable to other sensors, this study is focused on long-period fiber gratings (LPFG) corrosion sensors with thin Fe–C coating. The SODAD method uses corrosion-induced mass losses when successfully detected from different sensors for the first time, while the RPM model considers the randomness and difference between mass loss datasets from different sensors. The Fe–C coated LPFG sensors were tested in 3.5 wt.% NaCl solution until the wavelength of the transmission spectra no longer changed. The wavelength shift of 70% of the tested sensors ranged from 6 to 10 nm. Given a detection threshold of 2 nm in wavelength, the mass losses at 90% POD are 31.87%, 37.57%, and 34.00%, which are relatively consistent, and the upper-bound mass losses at 95% confidence level are 33.20%, 47.30%, and 40.83% from the traditional, SODAD, and RPM methods, respectively. In comparison with the SODAD method, the RPM method is more robust to any departure from model assumptions since significantly more data are used. For the 90% POD at 95% confidence level, the traditional method underestimated the mass loss by approximately 19%, which is unconservative in engineering applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
22. Psychosis risk for lesbian, gay, and bisexual individuals: systematic review and meta-analysis.
- Subjects
- RISK assessment, PSYCHOLOGY of gay people, MEDICAL information storage & retrieval systems, SUBSTANCE abuse, PSYCHOLOGY of lesbians, PARENT-child relationships, META-analysis, DESCRIPTIVE statistics, SYSTEMATIC reviews, MEDLINE, ODDS ratio, PSYCHOLOGICAL stress, PSYCHOSES, BISEXUAL people, ONLINE information services, CONFIDENCE intervals, FACTOR analysis, SOCIAL support, PSYCHOSOCIAL factors, PSYCHOLOGY information storage & retrieval systems, SOCIAL defeat, ADOLESCENCE
- Abstract
The social defeat hypothesis posits that low status and repeated humiliation increase the risk for psychotic disorders (PDs) and psychotic experiences (PEs). The purpose of this paper was to provide a systematic review of studies on risk of PDs and PEs among lesbian, gay, or bisexual (LGB) people and a quantitative synthesis of any difference in risk. PubMed, PsycINFO, Embase, and Web of Science were searched from database inception until January 30, 2024. Two independent reviewers assessed the eligibility and quality of studies, extracted effect sizes, and noted the results of mediation analyses. Using a random effects model we computed pooled odds ratios (ORs). Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines were followed. The search identified seven studies of PDs and six of PEs. As for PDs, the unadjusted (2.13; 95% confidence interval 0.72–6.34) and covariate-adjusted pooled OR (2.24; 1.72–3.53) were not significantly increased for LGB individuals. After exclusion of a study of limited quality, both the unadjusted pooled OR (2.77; 1.21–6.32) and the covariate-adjusted pooled OR (2.67; 1.53–4.66) were significantly increased. The pooled ORs were increased for PEs: unadjusted, pooled OR = 1.97 (1.47–2.63), covariate-adjusted, pooled OR = 1.85 (1.50–2.28). Studies of PE that examined the mediating role of several variables reported that the contribution of drug abuse was small compared to that of psychosocial stressors. The results of a study in adolescents suggested a protective effect of parental support. These findings suggest an increased psychosis risk for LGB people and support the social defeat hypothesis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
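A minimal sketch of DerSimonian–Laird random-effects pooling of odds ratios, the kind of synthesis described above (illustrative made-up numbers, not the review's studies):

```python
import numpy as np

or_est = np.array([1.5, 2.2, 1.8, 2.9])            # per-study odds ratios (made up)
ci_low = np.array([0.9, 1.1, 1.0, 1.4])
ci_high = np.array([2.5, 4.4, 3.2, 6.0])

y = np.log(or_est)                                  # log odds ratios
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
w = 1.0 / se**2                                     # fixed-effect weights

# Cochran's Q and the DL moment estimator of between-study variance tau^2.
q = np.sum(w * (y - np.sum(w * y) / w.sum())**2)
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))

w_re = 1.0 / (se**2 + tau2)                         # random-effects weights
pooled = np.sum(w_re * y) / w_re.sum()
se_p = np.sqrt(1.0 / w_re.sum())
lo, hi = pooled - 1.96 * se_p, pooled + 1.96 * se_p
print(f"pooled OR = {np.exp(pooled):.2f} ({np.exp(lo):.2f}, {np.exp(hi):.2f})")
```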
23. Enhancing Cardiovascular Risk Prediction: Development of an Advanced Xgboost Model with Hospital-Level Random Effects.
- Author
- Dong, Tim, Oronti, Iyabosola Busola, Sinha, Shubhra, Freitas, Alberto, Zhai, Bing, Chan, Jeremy, Fudulu, Daniel P., Caputo, Massimo, and Angelini, Gianni D.
- Subjects
- MACHINE learning, RANDOM effects model, REGRESSION analysis, SAMPLE size (Statistics), CARDIAC surgery, LOGISTIC regression analysis
- Abstract
Background: Ensemble tree-based models such as Xgboost are highly prognostic in cardiovascular medicine, as measured by the Clinical Effectiveness Metric (CEM). However, their ability to handle correlated data, such as hospital-level effects, is limited.
Objectives: The aim of this work is to develop a binary-outcome mixed-effects Xgboost (BME) model that integrates random effects at the hospital level. To ascertain how well the model handles correlated data in cardiovascular outcomes, we aim to assess its performance and compare it to fixed-effects Xgboost and traditional logistic regression models.
Methods: A total of 227,087 patients over 17 years of age, undergoing cardiac surgery in 42 UK hospitals between 1 January 2012 and 31 March 2019, were included. The dataset was split into two cohorts: training/validation (n = 157,196; 2012–2016) and holdout (n = 69,891; 2017–2019). The outcome variable was 30-day mortality, with hospitals considered as the clustering variable. Logistic regression, mixed-effects logistic regression, Xgboost and binary-outcome mixed-effects Xgboost (BME) models were fitted to both standardized and unstandardized datasets across a range of sample sizes, and the estimated prediction power metrics were compared to identify the best approach.
Results: The exploratory study found high variability in hospital-related mortality across datasets, which supported the adoption of the mixed-effects models. Unstandardized Xgboost BME demonstrated marked improvements in prediction power over the Xgboost model at small sample sizes, but performance differences decreased as dataset sizes increased. Generalized linear models (glms) and generalized linear mixed-effects models (glmers) showed similar results, with the Xgboost models again excelling at greater sample sizes.
Conclusions: These findings suggest that integrating mixed effects into machine learning models can enhance their performance on datasets where the sample size is small. [ABSTRACT FROM AUTHOR]
- Published
- 2024
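The core idea of mixing tree boosting with a cluster-level random intercept can be sketched with an EM-style alternation (a simplified, Gaussian-residual illustration of the general approach, not the authors' BME implementation; uses scikit-learn rather than Xgboost):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n, n_hosp = 2000, 40
hosp = rng.integers(0, n_hosp, n)                   # hospital of each patient
X = rng.normal(size=(n, 5))
b_true = rng.normal(0, 0.5, n_hosp)                 # true hospital effects
y = X[:, 0] - 0.5 * X[:, 1] + b_true[hosp] + rng.normal(0, 1, n)

b = np.zeros(n_hosp)
sigma2, tau2 = 1.0, 0.25                            # assumed variance components
for _ in range(5):
    # Fit the boosted model to the outcome with hospital effects removed.
    f = GradientBoostingRegressor().fit(X, y - b[hosp])
    resid = y - f.predict(X)
    # Update each hospital effect as a shrunken (BLUP-like) residual mean.
    for j in range(n_hosp):
        r = resid[hosp == j]
        b[j] = r.sum() / (len(r) + sigma2 / tau2)

print("estimated hospital effects (first 5):", np.round(b[:5], 2))
```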
24. Optimal planning of step‐stress accelerated degradation test based on Tweedie exponential dispersion process with random effects.
- Author
- He, Qian, Yan, Weian, Liu, Weidong, Bigaud, David, Xu, Xiaofan, and Lei, Zitong
- Subjects
- BUDGET, STOCHASTIC processes, TIME pressure, SAMPLE size (Statistics), DISPERSION (Chemistry), ACCELERATED life testing
- Abstract
The step-stress accelerated degradation test (SSADT) is an effective tool for assessing the reliability of highly reliable products. However, conducting an SSADT is expensive and time-consuming, and the quality of the obtained SSADT data affects the accuracy of the subsequent product reliability index estimations. Consequently, devising a cost-constrained SSADT plan that yields high-precision reliability estimates poses a significant challenge. This paper focuses on the optimal design of SSADT for the Tweedie exponential dispersion process with random effects (TEDR), a general degradation model capable of describing product heterogeneity. Under given budget and boundary constraints, the optimal sample size, observation frequency and observation times at each stress level are obtained by minimizing the asymptotic variance of the estimated quantile life at normal operating conditions. The sensitivity and stability of the SSADT plan are also studied, and the results indicate the robustness of the optimal plan against slight parameter fluctuations. We use the expectation maximization (EM) algorithm to estimate TEDR parameters and reliability indicators under SSADT, providing a systematic method for obtaining the optimal SSADT plan under budget constraints. The proposed framework is illustrated using LED chip data, showcasing its potential for practical application. [ABSTRACT FROM AUTHOR]
- Published
- 2024
25. A Flexible Adaptive Lasso Cox Frailty Model Based on the Full Likelihood.
- Author
- Hohberg, Maike and Groll, Andreas
- Abstract
In this work, a method to regularize Cox frailty models is proposed that accommodates time‐varying covariates and time‐varying coefficients and is based on the full likelihood instead of the partial likelihood. A particular advantage of this framework is that the baseline hazard can be explicitly modeled in a smooth, semiparametric way, for example, via P‐splines. Regularization for variable selection is performed via a lasso penalty and via group lasso for categorical variables while a second penalty regularizes wiggliness of smooth estimates of time‐varying coefficients and the baseline hazard. Additionally, adaptive weights are included to stabilize the estimation. The method is implemented in the R function coxlasso, which is now integrated into the package PenCoxFrail, and will be compared to other packages for regularized Cox regression. [ABSTRACT FROM AUTHOR]
- Published
- 2024
26. Optimal designs for nonlinear mixed-effects models using competitive swarm optimizer with mutated agents.
- Author
- Cui, Elvis Han, Zhang, Zizhao, and Wong, Weng Kee
- Abstract
Nature-inspired meta-heuristic algorithms are increasingly used in many disciplines to tackle challenging optimization problems. Our focus is to apply a newly proposed nature-inspired meta-heuristic algorithm called CSO-MA to solve challenging design problems in biosciences and demonstrate its flexibility to find various types of optimal approximate or exact designs for nonlinear mixed models with one or several interacting factors and with or without random effects. We show that CSO-MA is efficient and can frequently outperform other algorithms either in terms of speed or accuracy. The algorithm, like other meta-heuristic algorithms, is free of technical assumptions and flexible in that it can incorporate cost structures or multiple user-specified constraints, such as a fixed number of measurements per subject in a longitudinal study. When possible, we confirm that some of the CSO-MA generated designs are optimal with theory by developing theory-based innovative plots. Our applications include searching optimal designs to estimate (i) parameters in mixed nonlinear models with correlated random effects, (ii) a function of parameters for a count model in a dose combination study, and (iii) parameters in an HIV dynamic model. In each case, we show the advantages of using a meta-heuristic approach to solve the optimization problem, and the added benefits of the generated designs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
27. Tolerance Intervals Under a Class of Unbalanced Linear Mixed Models.
- Author
- Oliva-Aviles, Cristian and Hauser, Paloma
- Subjects
- DRUG development, PROBABILITY theory
- Abstract
The study of tolerance intervals for linear mixed models under unbalanced data has mainly focused on one-way random-effects models. In this article, we present a method to compute both one- and two-sided (β,γ)-tolerance intervals that is applicable to a more general class of unbalanced linear mixed models. The proposed method is based on the concept of generalized pivotal quantities and thus relies on the derivation of pivotal quantities that are shown to be independent within this particular class of models. The method employs Monte Carlo sampling methods to obtain realizations of the quantities needed to compute the tolerance intervals of interest. Extensive simulation studies confirm that coverage probabilities of the computed tolerance intervals are consistently close to their nominal levels. Furthermore, to showcase the application of our method in real-world scenarios, we provide illustrative examples of case studies that examine models falling within the specified class. Lastly, motivated by the prospective application of our method in drug development, we implement the proposed procedures to estimate the shelf life of a drug product through tolerance intervals and unbalanced stability data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
28. Using the synthetic control method to determine the impact of state-level mask mandates on COVID-19 fatality rates.
- Author
- Gius, Mark
- Subjects
- DEATH rate, MASK laws, COVID-19, RANDOM effects model, COVID-19 vaccines
- Abstract
The purpose of the present study is to determine if state-level mask mandates significantly reduced COVID-19 fatality rates. Using weekly data for the period 22 January 2020 to 1 February 2022 and a synthetic control method, results indicated that mask mandates were associated with reductions in COVID-19 fatalities. In both California and Washington, mask mandates were associated with a reduction in COVID-19 fatality rates, especially after COVID-19 vaccines became readily available in April of 2021. In Oregon, there was no statistically significant relationship between mask mandates and COVID-19 fatality rates. To test the robustness of these results, a random effects model was also estimated, and it was also found that mask mandates were associated with a significant reduction in COVID-19 fatality rates. Findings also indicated that states with larger elderly populations and larger populations of African-Americans had significantly higher COVID-19 fatality rates. [ABSTRACT FROM AUTHOR]
- Published
- 2024
29. Are your random effects normal? A simulation study of methods for estimating whether subjects or items come from more than one population by examining the distribution of random effects in mixed-effects logistic regression.
- Author
- Houghton, Zachary N. and Kapatsinski, Vsevolod
- Subjects
- RANDOM effects model, DISTRIBUTION (Probability theory), LOGISTIC regression analysis, REGRESSION analysis, STATISTICAL significance, INDIVIDUAL differences
- Abstract
With mixed-effects regression models becoming a mainstream tool for every psycholinguist, there is an increasing need to understand them more fully. In the last decade, most work on mixed-effects models in psycholinguistics has focused on properly specifying the random-effects structure to minimize error in evaluating the statistical significance of fixed-effects predictors. The present study examines a potential misspecification of random effects that has not been discussed in psycholinguistics: violation of the single-subject-population assumption, in the context of logistic regression. Estimated random-effects distributions in real studies often appear to be bi- or multimodal. However, there is no established way to estimate whether a random-effects distribution corresponds to more than one underlying population, especially in the more common case of a multivariate distribution of random effects. We show that violations of the single-subject-population assumption can usually be detected by assessing the (multivariate) normality of the inferred random-effects structure, unless the data show quasi-separability, i.e., many subjects or items show near-categorical behavior. In the absence of quasi-separability, several clustering methods are successful in determining which group each participant belongs to. The BIC difference between a two-cluster and a one-cluster solution can be used to determine that subjects (or items) do not come from a single population. This then allows the researcher to define and justify a new post hoc variable specifying the groups to which participants or items belong, which can be incorporated into regression analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
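The BIC check proposed above can be run directly on estimated random effects; a minimal sketch with simulated effects from two subject populations:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Stand-ins for per-subject (intercept, slope) random effects from a mixed model.
re1 = rng.multivariate_normal([0.0, 0.0], 0.1 * np.eye(2), size=60)
re2 = rng.multivariate_normal([1.5, -1.0], 0.1 * np.eye(2), size=60)
ranef = np.vstack([re1, re2])

bic = {k: GaussianMixture(n_components=k, random_state=0).fit(ranef).bic(ranef)
       for k in (1, 2)}
print(bic)  # a markedly lower BIC for k=2 suggests more than one population
```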
30. The Wiener Process with a Random Non-Monotone Hazard Rate-Based Drift.
- Author
- Rodríguez-Picón, Luis Alberto, Méndez-González, Luis Carlos, Pérez-Domínguez, Luis Asunción, and Tovanche-Picón, Héctor Eduardo
- Subjects
- WIENER processes, NUMERICAL functions, STOCHASTIC processes, NUMERICAL integration, CRACK propagation (Fracture mechanics)
- Abstract
Several variations of stochastic processes have been studied in the literature to obtain reliability estimations of products and systems from degradation data. As the degradation trajectories may have different degradation rates, it is necessary to consider alternatives to characterize their individual behavior. Some stochastic processes have a constant drift parameter, which defines the mean rate of the degradation process. However, in some cases the mean rate must not be considered constant, which means that the rate varies over the different stages of the degradation process. This poses an opportunity to study alternative strategies that allow modeling of this variation in the drift. For this, we consider the Hjorth rate, which is a failure rate that can take different shapes depending on the values of its parameters. In this paper, the integration of this hazard rate with the Wiener process is studied to individually identify the degradation rate of multiple degradation trajectories. Random effects are considered in the model to estimate a parameter of the Hjorth rate for every degradation trajectory, which allows us to identify the type of rate. The reliability function of the proposed model is obtained through numerical integration, as it takes a complex form. The proposed model is illustrated in two case studies based on crack propagation and infrared LED datasets. It is found that the proposed approach performs better for the reliability estimation of products based on information criteria. [ABSTRACT FROM AUTHOR]
- Published
- 2024
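For reference, the Hjorth (1980) failure rate mentioned above has the form (my rendering; parameterizations vary across sources):

```latex
h(t) = \delta t + \frac{\theta}{1 + \beta t},
```

which can be increasing, decreasing, constant, or bathtub-shaped depending on (δ, θ, β); the paper lets a unit-specific parameter of this rate, treated as a random effect, drive the Wiener drift.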
31. Modelling occurrence and quantity of longitudinal semicontinuous data simultaneously with nonparametric unobserved heterogeneity.
- Author
- Yan, Guohua and Ma, Renjun
- Subjects
- BRIEF Symptom Inventory, POISSON distribution, LONGITUDINAL method, INFANTS, HETEROGENEITY
- Abstract
Semicontinuous data frequently occur in longitudinal studies. The popular two‐part modelling approach deals with longitudinal semicontinuous data by analyzing the occurrence of positive values and the intensity of positive values separately; however, this separation may break down the natural sequence of semicontinuous data within a subject and destroy its serial dependence structure. In this article, we introduce a Tweedie compound Poisson mixed model to study the occurrence of positive values and the quantity of the semicontinuous response simultaneously. In our approach, covariate effects on the semicontinuous response are assessed directly. The correlation within a subject and the unobserved heterogeneity are incorporated with serially correlated nonparametric random effects. Our model unifies subject‐specific and population‐averaged interpretations. We illustrate the approach with applications to a Brief Symptom Inventory study and an infants' fluoride intake study. [ABSTRACT FROM AUTHOR]
- Published
- 2024
32. Investigating the Heterogeneity of "Study Twins".
- Author
- Röver, Christian and Friede, Tim
- Abstract
Meta‐analyses are commonly performed based on random‐effects models, while in certain cases one might also argue in favor of a common‐effect model. One such case may be given by the example of two "study twins" that are performed according to a common (or at least very similar) protocol. Here we investigate the particular case of meta‐analysis of a pair of studies, for example, summarizing the results of two confirmatory clinical trials in phase III of a clinical development program. Thereby, we focus on the question of to what extent homogeneity or heterogeneity may be discernible and include an empirical investigation of published ("twin") pairs of studies. A pair of estimates from two studies only provide very little evidence of homogeneity or heterogeneity of effects, and ad hoc decision criteria may often be misleading. [ABSTRACT FROM AUTHOR]
- Published
- 2024
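A one-line calculation shows why a pair of estimates carries so little information about heterogeneity: with two studies, Cochran's Q reduces to

```latex
Q = \frac{(\hat\theta_1 - \hat\theta_2)^2}{v_1 + v_2}
\;\sim\; \chi^2_1 \quad \text{under homogeneity},
```

a single-degree-of-freedom test with very low power against moderate heterogeneity, which is the paper's point about ad hoc decision criteria.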
33. Health Care Provider Clustering Using Fusion Penalty in Quasi‐Likelihood.
- Author
- Liu, Lili, He, Kevin, Wang, Di, Ma, Shujie, Qu, Annie, Luan, Yihui, Miller, J. Philip, Song, Yizhe, and Liu, Lei
- Abstract
There has been growing research interest in developing methodology to evaluate health care providers' performance with respect to a patient outcome. Random and fixed effects models are traditionally used for such a purpose. We propose a new method, using a fusion penalty to cluster health care providers based on quasi-likelihood. Without any a priori knowledge of grouping information, our method provides a desirable data-driven approach for automatically clustering health care providers into different groups based on their performance. Further, the quasi-likelihood is more flexible and robust than the regular likelihood in that no distributional assumption is needed. An efficient alternating direction method of multipliers algorithm is developed to implement the proposed method. We show that the proposed method enjoys the oracle properties; namely, it performs as well as if the true group structure were known in advance. The consistency and asymptotic normality of the estimators are established. Simulation studies and analysis of the national kidney transplant registry data demonstrate the utility and validity of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
34. Factor‐Analytic Variance–Covariance Structures for Prediction Into a Target Population of Environments.
- Author
- Piepho, Hans-Peter and Williams, Emlyn
- Abstract
Finlay–Wilkinson regression is a popular method for modeling genotype–environment interaction in plant breeding and crop variety testing. When environment is a random factor, this model may be cast as a factor‐analytic variance–covariance structure, implying a regression on random latent environmental variables. This paper reviews such models with a focus on their use in the analysis of multi‐environment trials for the purpose of making predictions in a target population of environments. We investigate the implication of random versus fixed effects assumptions, starting from basic analysis‐of‐variance models, then moving on to factor‐analytic models and considering the transition to models involving observable environmental covariates, which promise to provide more accurate and targeted predictions than models with latent environmental variables. [ABSTRACT FROM AUTHOR]
- Published
- 2024
35. G20 Countries and Sustainable Development: Do They Live up to Their Promises on CO2 Emissions?
- Author
- Freitas Souza, Rafael, Cal, Henrique Camano Rodrigues, Lima, Fabiano Guasti, Corrêa, Hamilton Luiz, Santos, Francisco Lledo, and Zanin, Rodrigo Bruno
- Subjects
- CARBON emissions, RANDOM effects model, GROUP of Twenty countries, MULTILEVEL models, GREENHOUSE gas mitigation
- Abstract
The aim of this study was to analyze and measure idiosyncratic differences in CO2 emission trends over time and between the different geographical contexts of the G20 signatory countries and to assess whether these countries are fulfilling their carbon emission reduction commitments, as stipulated in the G20 sustainable development agendas. To this end, a multilevel mixed-effects model was used, considering CO2 emissions data from 1950 to 2021 sourced from the World Bank. The research model captured approximately 93.05% of the joint variance in the data and showed (i) a positive relationship between the increase in CO2 emissions and the creation of the G20 [CI90: +0.0080; +0.1317]; (ii) that every year, CO2 emissions into the atmosphere are increased by an average of 0.0165 [CI95: +0.0009; +0.0321] billion tons by the G20 countries; (iii) that only Germany, France, and the United Kingdom have demonstrated a commitment to CO2 emissions reduction, showing a decreasing rate of CO2 emissions into the atmosphere; and (iv) that there seems to be a mismatch between the speed at which the G20 proposes climate policies and the speed at which these countries emit CO2. [ABSTRACT FROM AUTHOR]
- Published
- 2024
36. Examining social-demographic determinants of bike-sharing station capacity
- Author
- Boniphace Kutela, Hamza Mashoor Mustafa Bani Khalaf, Meshack Mihayo, Emmanuel Kidando, and Angela E. Kitali
- Subjects
- Bike sharing systems, Docking station capacity, Random effects, Negative binomial, Environmental sciences, GE1-350, Technology
- Abstract
This study applied Negative Binomial models to examine the social-demographic determinants of bike-sharing station capacity. It used social-demographic data from Smart Location Database and over 7,000 bike-sharing stations located in the United States. Results revealed that the station's city, households with no cars, the percentage of workers aged 18–64, gas prices, the Caucasian population, gross residential density on unprotected land, intersection density, and low to medium-earning workers are associated with the increase in station capacity. Contrarily, station capacity decreased with vehicle miles traveled and the number of high-earning workers. Findings are crucial to planners and operators in improving bike-sharing operations.
- Published
- 2024
37. Earnings management in local governments under a soft control regime
- Author
- Haugdal, Ane, Kjærland, Frode, Gårseth-Nesbakk, Levi, and Oust, Are
- Published
- 2024
38. Growth models for the progress test in Italian dentistry degree programs
- Author
- Biscardi, Giulio, Grilli, Leonardo, Rampichini, Carla, Antonucci, Laura, and Crocetta, Corrado
- Published
- 2024
39. Modeling of informal employment factors
- Author
- Yu. A. Metel and O. A. Lepekhin
- Subjects
- informal employment, panel data, fixed effects, random effects, logit model, RLMS HSE, Economics as a science, HB71-74
- Abstract
Introduction. The problem of reducing the level of informal employment has worsened in the last 3 years against the background of geopolitical crises. A number of experts call the growth of its volume one of the most important risks for the Russian labor market.
Goal. The paper examines the scale and features of informal employment in the Russian labor market and identifies the factors determining the choice of the informal employment sector.
Materials and methods. The information base of the study was compiled from RLMS HSE data for the period 2011-2022. In order to exclude a possible bias in the results due to differences in the sectoral structure of employment in the formal and informal sectors, the sample was limited. The authors estimated a panel logit model to determine the factors influencing the choice between the formal and informal sectors. Wage determinants in the informal sector are identified through the analysis of fixed- and random-effects panel data models.
Results and discussion. Based on the modeling results, it was found that the choice of the informal sector as the main place of work is strongly influenced by marital status, the importance of social protection measures, and career expectations. At the same time, the factors influencing the level of remuneration in the informal sector include the age of the employee, the amount of working time (in hours), and the availability of managerial experience.
Conclusion. Reducing the share of informal employment is of great importance both at the micro level and at the level of the whole country. Informal employment leads to low collection of taxes to budgets and of insurance contributions to state extra-budgetary funds, and to an increase in the number of violations of workers' labor rights, especially in terms of pay and labor protection.
- Published
- 2024
40. Repeated measures in functional logistic regression.
- Author
- Urbano-Leon, Cristhian Leonardo, Aguilera, Ana María, and Escabias, Manuel
- Subjects
- MULTICOLLINEARITY, LOGISTIC regression analysis, RANDOM effects model, REGRESSION analysis, FUNCTION spaces
- Abstract
We present a proposal to extend the functional logistic regression model – which models a binary scalar response variable from a functional predictor – to the case where the functional observations are not independent because the same functional variable is measured on the same individuals under different experimental conditions (repeated measures). The extension is addressed by including a random effect in the model. The functional approach of this model assumes that all functional objects are elements of the same finite-dimensional subspace of the space L² of square-integrable functions on the same compact domain, allowing the functions to be handled through their coefficients in a basis that spans this subspace (basis expansion). This methodology usually induces a multicollinearity problem in the multivariate model that emerges, which is solved with the use of the functional principal components of the functional predictor, resulting in a new functional principal component random effects model. The proposal is evaluated through a simulation study that contains three simulation scenarios for four different functional parameters, considering the lack of independence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Heritability Estimation of Cognitive Phenotypes in the ABCD Study® Using Mixed Models
- Author
-
Smith, Diana M, Loughnan, Robert, Friedman, Naomi P, Parekh, Pravesh, Frei, Oleksandr, Thompson, Wesley K, Andreassen, Ole A, Neale, Michael, Jernigan, Terry L, and Dale, Anders M
- Subjects
Biological Psychology, Psychology, Rare Diseases, Genetics, Behavioral and Social Science, Pediatric, Mental Health, Phenotype, Cognition, Brain, Research Design, Polymorphism, Single Nucleotide, Models, Genetic, Heritability, Twin studies, Mixed models, Height, Random effects, Zoology, Neurosciences, Genetics & Heredity, Biomedical and clinical sciences, Health sciences - Abstract
Twin and family studies have historically aimed to partition phenotypic variance into components corresponding to additive genetic effects (A), common environment (C), and unique environment (E). Here we present the ACE model and several extensions in the Adolescent Brain Cognitive Development℠ Study (ABCD Study®), fitted with the new Fast Efficient Mixed Effects Analysis (FEMA) package. In the twin sub-sample (n = 924; 462 twin pairs), heritability estimates were similar to those reported by prior studies for height (twin heritability = 0.86) and cognition (twin heritability between 0.00 and 0.61). Incorporating SNP-derived genetic relatedness and using the full ABCD Study® sample (n = 9,742) led to narrower confidence intervals for all parameter estimates. By leveraging the sparse clustering method used by FEMA to handle genetic relatedness only for participants within families, we were able to take advantage of the diverse distribution of genetic relatedness within the ABCD Study® sample.
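For intuition, the classical twin-correlation shortcut (Falconer's formulas) gives a back-of-envelope ACE decomposition; the paper's mixed-model machinery generalizes this far beyond twin pairs. The correlations below are illustrative values, not ABCD Study® estimates.

```r
# Back-of-envelope ACE decomposition from twin correlations.
r_mz <- 0.80   # monozygotic twin phenotype correlation (illustrative)
r_dz <- 0.45   # dizygotic twin phenotype correlation (illustrative)

A <- 2 * (r_mz - r_dz)   # additive genetic variance (narrow-sense heritability)
C <- r_mz - A            # common (shared) environment
E <- 1 - r_mz            # unique environment plus measurement error
c(A = A, C = C, E = E)   # here: A = 0.70, C = 0.10, E = 0.20
```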
- Published
- 2023
42. An Assessment Method for the Step-Down Stress Accelerated Degradation Test Considering Random Effects and Detection Errors.
- Author
-
Cui, Jie, Zhao, Heming, and Peng, Zhiling
- Subjects
GAUSSIAN processes, GAMMA distributions, STOCHASTIC processes, GAUSSIAN distribution, PROJECTILES - Abstract
The step-stress accelerated degradation test (ADT) provides a feasible method for assessing the storage life of high-reliability, long-life products. However, this method yields a slower rate of performance degradation at the beginning of the test, significantly reducing test efficiency. This article therefore proposes an assessment method for the step-down stress ADT that considers random effects and detection errors (SDRD). First, a new Inverse Gaussian (IG) model is proposed. The model introduces the Gamma distribution to characterize the randomness of the product degradation path and uses the normal distribution to describe detection errors in the performance parameters. In addition, because the likelihood function of the IG model is complex and has no closed-form expression, the Monte Carlo (MC) method is used to estimate the model's unknown parameters, which enhances computational accuracy and efficiency. Finally, to verify the effectiveness of the SDRD method, it is applied to step-down stress ADT data from a specific missile tank to assess its storage life. A comparison of the life assessments produced by different methods shows that the SDRD method is more effective for assessing the storage life of high-reliability, long-life products. [ABSTRACT FROM AUTHOR]
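The Monte Carlo idea can be sketched as follows: draw Gamma random effects, evaluate the IG increment density at error-corrected observations, and average. Everything here, from parameter values to the simulated data, is invented for illustration and is not the paper's model for the missile tank data.

```r
# Monte Carlo approximation of a likelihood with Gamma random effects
# and normal detection error; all names and values are illustrative.
dig <- function(x, mu, lambda) {   # inverse Gaussian density
  sqrt(lambda / (2 * pi * x^3)) * exp(-lambda * (x - mu)^2 / (2 * mu^2 * x))
}

mc_loglik <- function(y, mu, shape, rate, sd_err, M = 5000) {
  lam <- rgamma(M, shape = shape, rate = rate)   # Gamma random effects
  ll <- 0
  for (yi in y) {
    x  <- yi - rnorm(M, 0, sd_err)               # strip simulated detection error
    f  <- numeric(M)
    ok <- x > 0
    f[ok] <- dig(x[ok], mu, lam[ok])
    ll <- ll + log(mean(f))                      # average over random effects
  }
  ll
}

set.seed(1)
y <- rgamma(20, shape = 5, rate = 10)            # fake degradation increments
mc_loglik(y, mu = 0.5, shape = 2, rate = 4, sd_err = 0.05)
```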
- Published
- 2024
- Full Text
- View/download PDF
43. Mid-quantile regression for discrete panel data.
- Author
-
Russo, Alfonso, Farcomeni, Alessio, and Geraci, Marco
- Abstract
We propose novel quantile regression methods for settings where the response is discrete and the data come from a longitudinal design. The approach is based on conditional mid-quantiles, which have good theoretical properties even in the presence of ties. Optimization of a ridge-type penalized objective function accommodates the dependence in the data. We investigate the performance and pertinence of our methods in a simulation study and in an original application to macroprudential policy use in more than one hundred countries over a period of seventeen years. [ABSTRACT FROM AUTHOR]
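As a simplified illustration of the penalized-objective idea, the sketch below fits a ridge-penalised check-loss (quantile) regression with `optim()`; the authors' conditional mid-quantile estimator for discrete panel responses involves additional steps not shown here.

```r
# Ridge-penalised quantile objective on a simulated discrete response.
set.seed(42)
n <- 300
x <- rnorm(n)
y <- rpois(n, lambda = exp(0.5 + 0.8 * x))      # discrete response with ties

check_loss <- function(u, tau) u * (tau - (u < 0))

obj <- function(beta, tau, lambda) {
  mu <- beta[1] + beta[2] * x
  sum(check_loss(y - mu, tau)) + lambda * sum(beta[-1]^2)  # ridge penalty
}

fit <- optim(c(0, 0), obj, tau = 0.5, lambda = 1)  # Nelder-Mead by default
fit$par   # penalised median-regression coefficients
```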
- Published
- 2024
- Full Text
- View/download PDF
44. Measuring the individualization potential of treatment individualization rules: Application to rules built with a new parametric interaction model for parallel-group clinical trials.
- Author
-
Diaz, Francisco J
- Subjects
STRUCTURAL equation modeling, CROSSOVER trials, MACULAR edema, LATENT variables, INDIVIDUALIZED medicine - Abstract
For personalized medicine, we propose a general method of evaluating the potential performance of an individualized treatment rule in future clinical applications with new patients. We focus on rules that choose the most beneficial treatment for the patient out of two active (nonplacebo) treatments, which the clinician will prescribe regularly to the patient after the decision. We develop a measure of the individualization potential (IP) of a rule. The IP compares the expected effectiveness of the rule in a future clinical individualization setting versus the effectiveness of not trying individualization. We illustrate our evaluation method by explaining how to measure the IP of a useful type of individualized rules calculated through a new parametric interaction model of data from parallel-group clinical trials with continuous responses. Our interaction model implies a structural equation model we use to estimate the rule and its IP. We examine the IP both theoretically and with simulations when the estimated individualized rule is put into practice in new patients. Our individualization approach was superior to outcome-weighted machine learning according to simulations. We also show connections with crossover and N-of-1 trials. As a real data application, we estimate a rule for the individualization of treatments for diabetic macular edema and evaluate its IP. [ABSTRACT FROM AUTHOR]
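A toy simulation conveys what the IP measures: the expected outcome when each patient receives the rule's predicted-best treatment, compared with everyone receiving the single best overall treatment. The data-generating model below is invented, not the paper's interaction model.

```r
# Gain from individualizing treatment vs. one-size-fits-all (toy example).
set.seed(7)
n <- 1e5
z <- rnorm(n)                        # patient covariate
eff_A <- 1.0 + 0.8 * z               # expected response under treatment A
eff_B <- 1.2 - 0.5 * z               # expected response under treatment B

rule    <- ifelse(eff_A > eff_B, "A", "B")                  # individualized rule
y_rule  <- ifelse(rule == "A", eff_A, eff_B)                # outcome under the rule
y_fixed <- if (mean(eff_A) > mean(eff_B)) eff_A else eff_B  # best single treatment

mean(y_rule) - mean(y_fixed)   # crude IP-style contrast: gain from individualizing
```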
- Published
- 2024
- Full Text
- View/download PDF
45. Semi-Varying Coefficient Panel Data Model with Technical Indicators Predicts Stock Returns in Financial Market.
- Author
-
Hu, Xuemei, Pan, Ying, and Li, Xiang
- Abstract
Accurately predicting stock returns is a conundrum in financial markets; solving it can bring large economic benefits for investors and attracts broad attention. In this paper the authors combine a semi-varying coefficient model with technical analysis and statistical learning, and propose a semi-varying coefficient panel data model with individual effects to explore the dynamic relations between the stock returns of five companies (CVX, DFS, EMN, LYB, and MET) and five technical indicators (CCI, EMV, MOM, ln ATR, ln RSI) as well as closing price (ln CP). They combine the semi-parametric fixed effects estimator and the semi-parametric random effects estimator with a testing procedure to distinguish fixed effects (FE) from random effects (RE), and finally apply the estimated dynamic relations and the testing set to predict stock returns for the five companies in December 2020. The proposed method accommodates both varying and interactive relationships among the technical indicators and further improves the accuracy of stock return prediction. [ABSTRACT FROM AUTHOR]
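The FE-versus-RE decision step has a standard parametric analogue in R's plm package, sketched below with a Hausman test; the paper's semi-varying coefficient estimators go beyond what plm fits. The data frame `panel` and its columns are hypothetical.

```r
# Fixed vs. random effects on a hypothetical returns panel.
library(plm)

pdat <- pdata.frame(panel, index = c("company", "date"))
fe <- plm(ret ~ CCI + EMV + MOM + log(ATR) + log(RSI),
          data = pdat, model = "within")   # fixed effects
re <- plm(ret ~ CCI + EMV + MOM + log(ATR) + log(RSI),
          data = pdat, model = "random")   # random effects
phtest(fe, re)   # Hausman test: a small p-value favours fixed effects
```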
- Published
- 2024
- Full Text
- View/download PDF
46. A climate sensitive nonlinear mixed-effects height to crown base model: a study focuses on Phyllostachys pubescens.
- Author
-
Zhou, Xiao, Zhang, Xuan, Li, Zhen, Liu, Liyang, Sharma, Ram P., and Guan, Fengying
- Abstract
Key message: A climate-sensitive height to crown base (HCB) model, developed by combining a nonlinear mixed-effects model with a dummy variable approach, achieved higher HCB prediction accuracy for moso bamboo than models without climatic variables. Height to crown base (HCB) is one of the important variables used in forest growth and yield models, as it is crucial for assessing vitality, competition, growth and development stage, stability, and production efficiency of individuals. Because climate has a substantial impact on HCB, including it in a forest model is crucial to making the model climate-sensitive; however, existing HCB models do not consider climate impacts on Phyllostachys pubescens (moso bamboo). With data collected from 26 moso bamboo sample plots in Jiangsu and Fujian provinces in China, we used five common HCB functions to develop climate-sensitive HCB models. Modeling showed significant effects on HCB of two tree-level variables (height, H, and diameter at breast height, DBH), two stand-level variables (quadratic mean DBH, QMD, and canopy density, CD), and two climate variables (extreme maximum temperature, EXT, and Hargreaves' climatic moisture deficit, CMD). Compared with the basic model, introducing the covariates (QMD, CD, EXT and CMD), the dummy variable (regions), and random effects (block- and sample plot-level) increased R² by 5.01%, 7.13%, 7.14%, and 13.34%, respectively. The logistic model provided better fit statistics than the other models we evaluated, and two-level nonlinear mixed-effects (NLME) models significantly improved the fit further. Response calibration (model localization) with two medium-sized bamboos per sample plot provided the optimal prediction accuracy; this strategy can be considered a reasonable compromise between measurement costs and errors for HCB prediction. [ABSTRACT FROM AUTHOR]
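A minimal sketch of fitting a logistic HCB function with a plot-level random effect via nlme, assuming a hypothetical data frame `bamboo`; variable names, the climate covariate, and starting values are placeholders rather than the paper's fitted model.

```r
# Nonlinear mixed-effects HCB sketch with a plot-level random intercept.
library(nlme)

hcb_fit <- nlme(
  HCB ~ H / (1 + exp(a + b * DBH + c * EXT)),   # logistic-type HCB function
  data   = bamboo,
  fixed  = a + b + c ~ 1,
  random = a ~ 1 | plot,                        # plot-level random effect in a
  start  = c(a = 1, b = -0.05, c = 0.01)        # placeholder starting values
)
summary(hcb_fit)
```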
- Published
- 2024
- Full Text
- View/download PDF
47. Step selection functions with non‐linear and random effects.
- Author
-
Klappstein, Natasha J., Michelot, Théo, Fieberg, John, Pedersen, Eric J., and Mills Flemming, Joanna
- Subjects
HOME range (Animal geography), SPLINES, HABITATS, REALISM, ADDITIVES - Abstract
Step selection functions (SSFs) are used to jointly describe animal movement patterns and habitat preferences. Recent work has extended this framework to model inter-individual differences, account for unexplained structure in animals' space use, and capture temporally varying patterns of movement and habitat selection. In this paper, we formulate SSFs with penalised smooths (similar to generalised additive models) to unify new and existing extensions, and conveniently implement the models in the popular, open-source mgcv R package. We explore non-linear patterns of movement and habitat selection, and use the equivalence between penalised smoothing splines and random effects to implement individual-level and spatial random effects. This framework can also be used to fit varying-coefficient models to account for temporally or spatially heterogeneous patterns of selection (e.g. resulting from behavioural variation), or any other non-linear interactions between drivers of the animal's movement decisions. We provide the necessary technical details to understand several key special cases of smooths and their implementation in mgcv, showcase the ecological relevance using two illustrative examples, and provide R code to facilitate the adoption of these methods. This paper offers a broad overview of how smooth effects can be applied to increase the flexibility and biological realism of SSFs. [ABSTRACT FROM AUTHOR]
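In the spirit of the paper, a stylized mgcv call combines a non-linear habitat effect with an individual-level random effect expressed as a penalised smooth. Fitting a proper SSF additionally requires the stratified case-control likelihood trick the paper describes; the data frame `steps` and its columns are hypothetical.

```r
# Penalised smooths plus a random effect in mgcv (stylized, not a full SSF).
library(mgcv)

steps$log_sl <- log(steps$step_length)   # movement covariate; id must be a factor
ssf_fit <- gam(
  case ~ s(elevation) +    # non-linear habitat selection
         s(log_sl) +       # movement kernel adjustment
         s(id, bs = "re"), # individual-level random effect as a penalised smooth
  family = binomial,
  data   = steps,
  method = "REML"
)
summary(ssf_fit)
```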
- Published
- 2024
- Full Text
- View/download PDF
48. Enhancing environmental performance: evidence from SAARC Countries.
- Author
-
Bose, Shekar, Hoque, Asadul, and Weber, Olaf
- Subjects
RANDOM effects model, ENVIRONMENTAL quality, POPULATION density, ECONOMIC expansion - Abstract
This paper examines the influence of population density, economic growth, and regulatory quality on the environmental performance of five SAARC countries from 2000 to 2020, using fixed and random effects models. Quantitative data on socio-economic and environmental characteristics reveal notable differences across countries. The results show that economic growth has a significant positive impact, and population density a significant negative impact, on environmental performance. While the estimated coefficient of the regulatory quality variable was positive, it was statistically insignificant. The results also suggest significant inter-country dependency and a significant change in environmental performance across countries over time. Based on the results, several strategic approaches are proposed for both single-country and multi-country settings, and we emphasize the roles of government, international and regional agencies, the private sector, and civil society organizations. Finally, possible extensions of the present paper are discussed. [ABSTRACT FROM AUTHOR]
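A hedged sketch of the panel specification in R's plm package: a random-effects model with year dummies to capture change over time. The data frame `saarc` and its column names are assumptions for illustration.

```r
# Random-effects panel model with year dummies (hypothetical column names).
library(plm)

pdat <- pdata.frame(saarc, index = c("country", "year"))
re <- plm(epi ~ gdp_growth + pop_density + reg_quality + factor(year),
          data = pdat, model = "random")
summary(re)
```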
- Published
- 2024
- Full Text
- View/download PDF
49. Random effects models of tumour growth for investigating interval breast cancer.
- Author
-
Orsini, Letizia, Czene, Kamila, and Humphreys, Keith
- Subjects
RANDOM effects model, BREAST cancer, CANCER cell growth, EARLY detection of cancer, MEDICAL screening, TUMORS - Abstract
In Nordic countries and across Europe, breast cancer screening participation is high. However, a significant number of breast cancer cases are still diagnosed due to symptoms between screening rounds, termed "interval cancers". Radiologists use the interval cancer proportion as a proxy for the screening false negative rate (i.e., 1 − sensitivity). Our objective is to enhance our understanding of interval cancers by applying continuous tumour growth models to data from a study involving incident invasive breast cancer cases. Building upon previous findings regarding stationary distributions of tumour size and growth rate distributions in non-screened populations, we develop an analytical expression for the proportion of interval breast cancer cases among regularly screened women. Our approach avoids relying on estimated background cancer rates. We make specific parametric assumptions concerning tumour growth and detection processes (screening or symptoms), but our framework easily accommodates alternative assumptions. We also show how our developed analytical expression for the proportion of interval breast cancers within a screened population can be incorporated into an approach for fitting tumour growth models to incident case data. We fit a model on 3493 cases diagnosed in Sweden between 2001 and 2008. Our methodology allows us to estimate the distribution of tumour sizes at the most recent screening for interval cancers. Importantly, we find that our model-based expected incidence of interval breast cancers aligns closely with observed patterns in our study and in a large Nordic screening cohort. Finally, we evaluate the association between screening interval length and the interval cancer proportion. Our analytical expression represents a useful tool for gaining insights into the performance of population-based breast cancer screening programs. [ABSTRACT FROM AUTHOR]
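A toy Monte Carlo analogue of the quantity derived analytically in the paper: under exponential tumour growth with a lognormal random growth rate and size-dependent screening sensitivity, the fraction of cancers that surface symptomatically before the next annual screen can be simulated directly. All parameter values are invented.

```r
# Simulated interval-cancer proportion under random exponential growth.
set.seed(3)
n  <- 1e5
g  <- rlnorm(n, meanlog = log(0.005), sdlog = 0.5)  # growth rate per day
v0 <- 1                                             # volume at last screen
t_next  <- 365                                      # days to next screen
v_sympt <- v0 * exp(rlnorm(n, meanlog = 1, sdlog = 0.5))  # symptomatic size
t_sympt <- log(v_sympt / v0) / g                    # time until symptoms

interval <- t_sympt < t_next                        # surfaces between screens
v_screen <- v0 * exp(g * t_next)                    # size at next screen
detected <- runif(n) < plogis(-2 + log(v_screen))   # size-dependent sensitivity

# interval-cancer proportion among cancers surfacing in this round
mean(interval) / (mean(interval) + mean(!interval & detected))
```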
- Published
- 2024
- Full Text
- View/download PDF
50. The value of generalized linear mixed models for data analysis in the plant sciences.
- Author
-
Madden, Laurence V. and Ojiambo, Peter S.
- Subjects
MARGINAL distributions, BOTANY, DATA analysis, DATA modeling, COMPUTER storage devices, DISCRETE choice models, RANDOM effects model - Abstract
Modern data analysis typically involves fitting a statistical model to data, which includes estimating the model parameters and their precision (standard errors) and testing hypotheses based on the parameter estimates. Linear mixed models (LMMs) fitted through likelihood methods have been the foundation of data analysis for well over a quarter of a century. These models allow the researcher to simultaneously consider fixed (e.g., treatment) and random (e.g., block and location) effects on the response variables and to account for the correlation of observations, when it is assumed that the response variable has a normal distribution. Analysis of variance (ANOVA), developed about a century ago, can be considered a special case of the LMM. A wide diversity of experimental and treatment designs, as well as correlations of the response variable, can be handled using these models. Many response variables are not normally distributed, of course, such as discrete variables that may or may not be expressed as a percentage (e.g., counts of insects or diseased plants) and continuous variables with asymmetrical distributions (e.g., survival time). As expansions of LMMs, generalized linear mixed models (GLMMs) can be used to analyze data arising from several non-normal statistical distributions, including the discrete binomial, Poisson, and negative binomial, as well as the continuous gamma and beta. A GLMM allows the data analyst to better match the model to the data rather than force the data to match a specific model. The increase in computer memory and processing speed, together with the development of user-friendly software and progress in statistical theory and methodology, has made it practical for non-statisticians to use GLMMs since the late 2000s. The switch from LMMs to GLMMs is deceptive, however, as there are several major issues that must be thought through or judged when using a GLMM, issues which are mostly resolved for routine analyses with LMMs. These include the consideration of conditional versus marginal distributions and means, overdispersion (for discrete data), the model-fitting method [e.g., maximum likelihood (integral approximation), restricted pseudo-likelihood, and quasi-likelihood], and the choice of link function to relate the mean to the fixed and random effects. The issues are explained conceptually with different model formulations and subsequently with an example involving the percentage of diseased plants in a field study with wheat, as well as with simulated data, starting with an LMM and transitioning to a GLMM. A brief synopsis of published GLMM-based analyses in the plant agricultural literature is presented to give readers a sense of the range of applications of this approach to data analysis. [ABSTRACT FROM AUTHOR]
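A minimal sketch of the kind of GLMM discussed, assuming a hypothetical `wheat` data frame: the proportion of diseased plants is modelled with a binomial GLMM, block enters as a random effect, and an observation-level random effect absorbs overdispersion.

```r
# Binomial GLMM for diseased-plant counts (hypothetical data frame `wheat`).
library(lme4)

wheat$obs <- factor(seq_len(nrow(wheat)))   # observation-level random effect
fit <- glmer(
  cbind(diseased, total - diseased) ~ treatment + (1 | block) + (1 | obs),
  data   = wheat,
  family = binomial(link = "logit")
)
summary(fit)   # conditional (subject-specific) effects on the logit scale
```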
- Published
- 2024
- Full Text
- View/download PDF