144 results for "Sherri Rose"
Search Results
102. Finding quantitative trait loci genes with collaborative targeted maximum likelihood learning
- Author
-
Hui Wang, Sherri Rose, and Mark J. van der Laan
- Subjects
Statistics and Probability ,Maximum likelihood ,Computational biology ,Quantitative trait locus ,Article ,Semiparametric model ,Family-based QTL mapping ,Statistics ,Trait ,Quantitative Trait Loci Genes ,Statistics, Probability and Uncertainty ,Association mapping ,Mathematics ,Parametric statistics - Abstract
Quantitative trait loci mapping is focused on identifying the positions and effects of genes underlying an observed trait. We present a collaborative targeted maximum likelihood estimator in a semi-parametric model using a newly proposed 2-part super learning algorithm to find quantitative trait loci genes in Listeria data. Results are compared to the parametric composite interval mapping approach.
- Published
- 2011
103. Effects of PON polymorphisms and haplotypes on molecular phenotype in Mexican-American mothers and children
- Author
-
Lisa F. Barcellos, Brenda Eskenazi, Nina Holland, Karen Huen, Kenneth B. Beckman, and Sherri Rose
- Subjects
Adult ,Linkage disequilibrium ,Epidemiology ,Health, Toxicology and Mutagenesis ,Mothers ,Single-nucleotide polymorphism ,Polymorphism, Single Nucleotide ,Linkage Disequilibrium ,Article ,Cohort Studies ,Young Adult ,Gene Frequency ,INDEL Mutation ,Mexican Americans ,Genetic variation ,Humans ,Longitudinal Studies ,Pesticides ,Child ,Allele frequency ,Genetics (clinical) ,Genetics ,biology ,Aryldialkylphosphatase ,Haplotype ,Esterases ,Infant, Newborn ,Paraoxonase ,PON1 ,Organophosphates ,Minor allele frequency ,Haplotypes ,biology.protein ,Female - Abstract
Paraoxonase 1 (PON1) prevents oxidation of low-density lipoproteins and inactivates toxic oxon derivatives of organophosphate pesticides (OPs). More than 250 SNPs have been previously identified in the PON1 gene, yet studies of PON1 genetic variation focus primarily on a few promoter SNPs (-108, -162) and coding SNPs (192, 55). We sequenced the PON1 gene in 30 subjects from a Mexican-American birth cohort and identified 94 polymorphisms with minor allele frequencies >5%, including several novel variants (six SNPs, one insertion, and two deletions). Variants of the PON1 gene and three SNPs from PON2 and PON3 were genotyped in 700 children and mothers from the same cohort. PON1 phenotype was established using two substrate-specific assays: arylesterase (AREase) and paraoxonase (POase). Twelve PON1 and two PON2 polymorphisms were significantly associated with AREase activity, and 37 polymorphisms with POase activity; however, only nine were not in strong linkage disequilibrium (LD) with either PON1(-108) or PON1(192) (r(2) > 0.20), SNPs with known effects on PON1 quantity and substrate-specific activity. Single tagSNPs PON1(55) and PON1(192) accounted for similar ranges of AREase variation compared to haplotypes comprised of multiple SNPs within their haplotype blocks. However, PON1(55) explained 11-16% of POase activity, while six SNPs in the same haplotype block explained threefold more variance (36-56%). Although LD structure in the PON cluster seems similar between Mexicans and Caucasians, allele frequencies for many polymorphisms differed strikingly. Functional effects of PON genetic variation related to susceptibility to OPs and oxidative stress also differed by age and should be considered in protecting vulnerable subpopulations.
- Published
- 2011
104. Machine Learning for Prediction in Electronic Health Data
- Author
-
Sherri Rose
- Subjects
Information retrieval ,010102 general mathematics ,MEDLINE ,General Medicine ,01 natural sciences ,Health data ,03 medical and health sciences ,0302 clinical medicine ,Informatics ,medicine ,Delirium ,030212 general & internal medicine ,0101 mathematics ,medicine.symptom ,Psychology - Published
- 2018
105. Classifying lung cancer stage from health care claims with a clinical algorithm or a machine-learning approach
- Author
-
Mary Beth Landrum, Nancy L. Keating, Gabriel A. Brooks, Sherri Rose, and Savannah Bergquist
- Subjects
Cancer Research ,medicine.medical_specialty ,business.industry ,Cancer stage ,Cancer ,medicine.disease ,Clinical algorithm ,Oncology ,Health care ,Medicine ,Stage (cooking) ,business ,Intensive care medicine ,Lung cancer - Abstract
6589 Background: Cancer stage is a critical determinant of cancer outcomes; however, stage is not available in claims-based data sources used for evaluating real-world outcomes. We compare two new me...
- Published
- 2018
106. Prediction of absolute risk of acute graft-versus-host disease following hematopoietic cell transplantation
- Author
-
Katharine C. Hsu, Stephanie J. Lee, Sherri Rose, Stephen R. Spellman, Michael R. Verneris, Sebastien Haneuse, Catherine Lee, Reza Abdi, Hai-Lin Wang, and Katharina Fleischhauer
- Subjects
Oncology ,Physiology ,Medizin ,Cancer Treatment ,lcsh:Medicine ,Graft vs Host Disease ,Disease ,White Blood Cells ,Mathematical and Statistical Techniques ,0302 clinical medicine ,Animal Cells ,Risk Factors ,Medicine and Health Sciences ,Blood and Lymphatic System Procedures ,Public and Occupational Health ,Young adult ,lcsh:Science ,Child ,Bone Marrow Transplantation ,Multidisciplinary ,T Cells ,Applied Mathematics ,Simulation and Modeling ,Hematopoietic Stem Cell Transplantation ,Absolute risk reduction ,Middle Aged ,Body Fluids ,3. Good health ,Blood ,surgical procedures, operative ,Hematologic Neoplasms ,030220 oncology & carcinogenesis ,Physical Sciences ,Female ,Cellular Types ,Anatomy ,Algorithms ,Statistics (Mathematics) ,Research Article ,Adult ,medicine.medical_specialty ,Adolescent ,Immune Cells ,Immunology ,Surgical and Invasive Medical Procedures ,Research and Analysis Methods ,Young Adult ,03 medical and health sciences ,Transplantation Immunology ,Internal medicine ,medicine ,Humans ,Statistical Methods ,Preventive healthcare ,Transplantation ,Blood Cells ,Prophylaxis ,Donor selection ,lcsh:R ,Biology and Life Sciences ,Cell Biology ,Clinical trial ,ROC Curve ,lcsh:Q ,Clinical Immunology ,Preventive Medicine ,Clinical Medicine ,Complication ,Mathematics ,Forecasting ,030215 immunology - Abstract
Allogeneic hematopoietic cell transplantation (HCT) is the treatment of choice for a variety of hematologic malignancies and disorders. Unfortunately, acute graft-versus-host disease (GVHD) is a frequent complication of HCT. While substantial research has identified clinical, genetic and proteomic risk factors for acute GVHD, few studies have sought to develop risk prediction tools that quantify absolute risk. Such tools would be useful for: optimizing donor selection; guiding GVHD prophylaxis, post-transplant treatment and monitoring strategies; and, recruitment of patients into clinical trials. Using data on 9,651 patients who underwent first allogeneic HLA-identical sibling or unrelated donor HCT between 01/1999-12/2011 for treatment of a hematologic malignancy, we developed and evaluated a suite of risk prediction tools for: (i) acute GVHD within 100 days post-transplant and (ii) a composite endpoint of acute GVHD or death within 100 days post-transplant. We considered two sets of inputs: (i) clinical factors that are typically readily-available, included as main effects; and, (ii) main effects combined with a selection of a priori specified two-way interactions. To build the prediction tools we used the super learner, a recently developed ensemble learning statistical framework that combines results from multiple other algorithms/methods to construct a single, optimal prediction tool. Across the final super learner prediction tools, the area-under-the curve (AUC) ranged from 0.613–0.640. Improving the performance of risk prediction tools will likely require extension beyond clinical factors to include biological variables such as genetic and proteomic biomarkers, although the measurement of these factors may currently not be practical in standard clinical settings.
- Published
- 2018
107. The Impact of Exclusion Criteria on a Physician’s Adenoma Detection Rate
- Author
-
Ateev Mehrotra, Robert E. Schoen, Felippe O. Marcondes, Katie Dean, Daniel A. Leffler, Sherri Rose, and Michele I. Morris
- Subjects
Adenoma ,Adult ,Male ,medicine.medical_specialty ,Adolescent ,Colorectal cancer ,Outcome measurements ,MEDLINE ,Colonoscopy ,Article ,Young Adult ,medicine ,Humans ,Radiology, Nuclear Medicine and imaging ,Aged ,Quality Indicators, Health Care ,Aged, 80 and over ,medicine.diagnostic_test ,business.industry ,Patient Selection ,Gastroenterology ,Middle Aged ,medicine.disease ,Confidence interval ,Surgery ,Private practice ,Emergency medicine ,Female ,Clinical Competence ,Detection rate ,business ,Colorectal Neoplasms - Abstract
Background The adenoma detection rate (ADR) is a validated and widely used measure of colonoscopy quality. There is uncertainty in the published literature as to which colonoscopy examinations should be excluded when measuring a physician's ADR. Objective To examine the impact of varying the colonoscopy exclusion criteria on physician ADR. Design We applied different exclusion criteria used in 30 previous studies to a dataset of endoscopy and pathology reports. Under each exclusion criterion, we calculated physician ADR. Setting A private practice colonoscopy center affiliated with the University of Illinois College of Medicine. Patients Data on 20,040 colonoscopy examinations performed by 11 gastroenterologists from July 2009 to May 2013 and associated pathology notes. Main Outcome Measurements ADRs across all colonoscopy examinations, each physician's ADR, and ADR ranking. Results There were 28 different exclusion criteria used when measuring the ADR. Each study used a different combination of these exclusion criteria. The proportion of all colonoscopy examinations in the dataset excluded under these combinations of exclusion criteria ranged from 0% to 92.2%. The mean ADR across all colonoscopy examinations was 39.1%. The change in mean ADR after applying the 28 exclusion criteria ranged from −5.5 to +3.0 percentage points. However, the exclusion criteria affected each physician's ADR relatively equally, and therefore physicians' rankings via the ADR were stable. Limitations ADR assessment was limited to a single private endoscopy center. Conclusion There is wide variation in the exclusion criteria used when measuring the ADR. Although these exclusion criteria can affect overall ADRs, the relative rankings of physicians by ADR were stable. A consensus definition of which exclusion criteria are applied when measuring ADR is needed.
- Published
- 2015
108. Targeted Maximum Likelihood Estimation for Causal Inference in Observational Studies
- Author
-
Sherri Rose and Megan S. Schuler
- Subjects
Epidemiology ,Average treatment effect ,Computer science ,Context (language use) ,Machine learning ,computer.software_genre ,01 natural sciences ,Machine Learning ,010104 statistics & probability ,03 medical and health sciences ,0302 clinical medicine ,Bias ,Robustness (computer science) ,Humans ,Computer Simulation ,030212 general & internal medicine ,0101 mathematics ,Propensity Score ,Exercise ,Parametric statistics ,Likelihood Functions ,business.industry ,Depression ,Inverse probability weighting ,Confounding Factors, Epidemiologic ,Regression ,Causality ,Observational Studies as Topic ,Causal inference ,Data Interpretation, Statistical ,Epidemiologic Research Design ,Propensity score matching ,Artificial intelligence ,business ,computer - Abstract
Estimation of causal effects using observational data continues to grow in popularity in the epidemiologic literature. While many applications of causal effect estimation use propensity score methods or G-computation, targeted maximum likelihood estimation (TMLE) is a well-established alternative method with desirable statistical properties. TMLE is a doubly robust maximum-likelihood-based approach that includes a secondary "targeting" step that optimizes the bias-variance tradeoff for the target parameter. Under standard causal assumptions, estimates can be interpreted as causal effects. Because TMLE has not been as widely implemented in epidemiologic research, we aim to provide an accessible presentation of TMLE for applied researchers. We give step-by-step instructions for using TMLE to estimate the average treatment effect in the context of an observational study. We discuss conceptual similarities and differences between TMLE and 2 common estimation approaches (G-computation and inverse probability weighting) and present findings on their relative performance using simulated data. Our simulation study compares methods under parametric regression misspecification; our results highlight TMLE's property of double robustness. Additionally, we discuss best practices for TMLE implementation, particularly the use of ensembled machine learning algorithms. Our simulation study demonstrates all methods using super learning, highlighting that incorporation of machine learning may outperform parametric regression in observational data settings.
- Published
- 2015
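The inverse probability weighting comparator discussed in the abstract above can be illustrated with a minimal, self-contained sketch. Everything below — the data-generating model, the coefficients, and the known propensity score — is an assumption of this illustration, not drawn from the article (in practice the propensity score is estimated, which is where the machine learning discussed in the abstract enters):

```python
import random

random.seed(0)

# Simulate a confounded observational study: W raises both the chance
# of treatment A and the outcome Y. The true treatment effect is 2.0.
n = 50000
data = []
for _ in range(n):
    w = random.random()                          # confounder in [0, 1)
    p_a = 0.2 + 0.6 * w                          # treatment more likely when W is large
    a = 1 if random.random() < p_a else 0
    y = 2.0 * a + 3.0 * w + random.gauss(0, 1)   # true effect of A on Y is 2.0
    data.append((w, a, y))

# Naive difference in means is biased upward by confounding through W.
treated = [y for _, a, y in data if a == 1]
control = [y for _, a, y in data if a == 0]
naive = sum(treated) / len(treated) - sum(control) / len(control)

# IPW: weight each subject by 1 / P(A = a | W), here using the known
# propensity score from the simulation.
ipw_treated = sum(y / (0.2 + 0.6 * w) for w, a, y in data if a == 1) / n
ipw_control = sum(y / (1.0 - (0.2 + 0.6 * w)) for w, a, y in data if a == 0) / n
ipw = ipw_treated - ipw_control

print(f"naive: {naive:.2f}  IPW: {ipw:.2f}  truth: 2.00")
```

Weighting removes the bias that the naive comparison inherits from W; TMLE, as the abstract notes, adds a targeting step on top of an initial fit to improve on estimators of this kind.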
109. Medical schools in fragile and conflict-affected states: A global, country-level analysis
- Author
-
Sherri Rose, Farrah J. Mateen, and Erica McKenzie
- Subjects
Country level ,Political science ,Development economics ,General Medicine ,Infectious and parasitic diseases ,RC109-216 ,Public aspects of medicine ,RA1-1270 - Published
- 2015
- Full Text
- View/download PDF
110. Su1634 Serrated Polyp Detection Is Related to Specialty Training and Colonoscopy Volume: Results From a Large Multicenter Colonoscopy Quality Study
- Author
-
Michele I. Morris, Ateev Mehrotra, Rebecca A. Gourevitch, Sherri Rose, Robert E. Schoen, David Carrell, Julia B. Greer, Daniel A. Leffler, and Seth D. Crockett
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,business.industry ,General surgery ,media_common.quotation_subject ,Serrated polyp ,Gastroenterology ,Specialty ,Colonoscopy ,medicine ,Radiology, Nuclear Medicine and imaging ,Quality (business) ,business ,media_common ,Volume (compression) - Published
- 2017
111. How well can post-traumatic stress disorder be predicted from pre-trauma risk factors? An exploratory study in the WHO World Mental Health Surveys
- Author
-
Alan M. Zaslavsky, Sherri Rose, Arieh Y. Shalev, Paul E. Stang, Anthony J. Rosellini, Dan J. Stein, Silvia Florescu, Derrick Silove, Koen Demyttenaere, Kate M. Scott, Samuel A. McLean, Ayelet Meron Ruscio, Ronald C. Kessler, Steven G. Heeringa, Matthias C. Angermeyer, Yolanda Torres, Elie G. Karam, Sam Murphy, María Elena Medina-Mora, Katie A. McLaughlin, Giovanni de Girolamo, Josep Maria Haro, Peter de Jonge, Sing Lee, Israel Liberzon, Maria Petukhova, B. E. Pennell, Norito Kawakami, Victoria Shahly, Maria Carmen Viana, Viviane Kovess-Masfety, Marina Piazza, Jose Posada-Villa, Eric Hill, Hristo Hinkov, Karestan C. Koenen, Fernando Navarro-Mateu, Oye Gureje, José Miguel Caldas de Almeida, Evelyn J. Bromet, Department of Psychiatry and Mental Health, Faculty of Health Sciences, Interdisciplinary Centre Psychopathology and Emotion regulation (ICPE), and Life Course Epidemiology (LCE)
- Subjects
random forests ,medicine.medical_specialty ,SYMPTOMS ,Settore MED/25 - PSCHIATRIA ,CIDI ,purl.org/pe-repo/ocde/ford#3.02.24 [https] ,Exploratory research ,Psychological intervention ,Sample (statistics) ,ridge regression ,Medicine ,EXPOSURE ,penalized regression ,Set (psychology) ,Psychiatry ,VERSION ,METAANALYSIS ,Penalized regression ,Post-traumatic stress disorder ,business.industry ,Traumatic stress ,PTSD ,Research Reports ,ADULTS ,Mental health ,PREVENTION ,Psychiatry and Mental health ,machine learning ,Pshychiatric Mental Health ,business ,predictive modeling ,INTERVENTIONS - Abstract
Post-traumatic stress disorder (PTSD) should be one of the most preventable mental disorders, since many people exposed to traumatic experiences (TEs) could be targeted in first response settings in the immediate aftermath of exposure for preventive intervention. However, these interventions are costly and the proportion of TE-exposed people who develop PTSD is small. To be cost-effective, risk prediction rules are needed to target high-risk people in the immediate aftermath of a TE. Although a number of studies have been carried out to examine prospective predictors of PTSD among people recently exposed to TEs, most were either small or focused on a narrow sample, making it unclear how well PTSD can be predicted in the total population of people exposed to TEs. The current report investigates this issue in a large sample based on the World Health Organization (WHO)'s World Mental Health Surveys. Retrospective reports were obtained on the predictors of PTSD associated with 47,466 TE exposures in representative community surveys carried out in 24 countries. Machine learning methods (random forests, penalized regression, super learner) were used to develop a model predicting PTSD from information about TE type, socio-demographics, and prior histories of cumulative TE exposure and DSM-IV disorders. DSM-IV PTSD prevalence was 4.0% across the 47,466 TE exposures. 95.6% of these PTSD cases were associated with the 10.0% of exposures (i.e., 4,747) classified by machine learning algorithm as having highest predicted PTSD risk. The 47,466 exposures were divided into 20 ventiles (20 groups of equal size) ranked by predicted PTSD risk. PTSD occurred after 56.3% of the TEs in the highest-risk ventile, 20.0% of the TEs in the second highest ventile, and 0.0-1.3% of the TEs in the 18 remaining ventiles. These patterns of differential risk were quite stable across demographic-geographic sub-samples. 
These results demonstrate that a sensitive risk algorithm can be created using data collected in the immediate aftermath of TE exposure to target people at highest risk of PTSD. However, validation of the algorithm is needed in prospective samples, and additional work is warranted to refine the algorithm both in terms of determining a minimum required predictor set and developing a practical administration and scoring protocol that can be used in routine clinical practice.
- Published
- 2014
- Full Text
- View/download PDF
112. 2014 Articles of the Year, Reviewers of the Year, and Figure of the Year
- Author
-
Daniel Altmann, Sherri Rose, John D. Beard, Roberta B. Ness, Dustin T. Duncan, Chanelle J. Howe, Ruth C. Travis, Alison S Rustagi, Ashley I. Naimi, and Orianne Dumas
- Subjects
History ,Epidemiology - Published
- 2015
113. Targeted Learning : Causal Inference for Observational and Experimental Data
- Author
-
Mark J. van der Laan, Sherri Rose, Mark J. van der Laan, and Sherri Rose
- Subjects
- Inference, Mathematical statistics
- Abstract
The statistics profession is at a unique point in history. The need for valid statistical tools is greater than ever; data sets are massive, often with hundreds of thousands of measurements for a single subject. The field is ready to move towards clear objective benchmarks under which tools can be evaluated. Targeted learning allows (1) the full generalization and utilization of cross-validation as an estimator selection tool so that the subjective choices made by humans are now made by the machine, and (2) targeting the fitting of the probability distribution of the data toward the target parameter representing the scientific question of interest. This book is aimed at both statisticians and applied researchers interested in causal inference and general effect estimation for observational and experimental data. Part I is an accessible introduction to super learning and the targeted maximum likelihood estimator, including related concepts necessary to understand and apply these methods. Parts II-IX handle complex data structures and topics applied researchers will immediately recognize from their own research, including time-to-event outcomes, direct and indirect effects, positivity violations, case-control studies, censored data, longitudinal data, and genomic studies.
- Published
- 2011
114. A targeted maximum likelihood estimator for two-stage designs
- Author
-
Mark J. van der Laan and Sherri Rose
- Subjects
Statistics and Probability ,Likelihood Functions ,Models, Statistical ,Estimation theory ,Estimator ,Sampling (statistics) ,General Medicine ,Estimating equations ,Maximum likelihood sequence estimation ,Missing data ,Censoring (statistics) ,Article ,Sample size determination ,Research Design ,Case-Control Studies ,Sample Size ,Statistics ,Humans ,Statistics, Probability and Uncertainty ,Mathematics - Abstract
We consider two-stage sampling designs, including so-called nested case control studies, where one takes a random sample from a target population and completes measurements on each subject in the first stage. The second stage involves drawing a subsample from the original sample, collecting additional data on the subsample. This data structure can be viewed as a missing data structure on the full-data structure collected in the second-stage of the study. Methods for analyzing two-stage designs include parametric maximum likelihood estimation and estimating equation methodology. We propose an inverse probability of censoring weighted targeted maximum likelihood estimator (IPCW-TMLE) in two-stage sampling designs and present simulation studies featuring this estimator.
- Published
- 2011
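The inverse-probability-of-censoring weighting idea underlying the IPCW-TMLE described above can be sketched in its simplest form: reweight each second-stage observation by the inverse of its known sampling probability. The two-stage design simulated below (sampling probabilities, coefficients, target quantity) is an assumption of this sketch, not the article's design:

```python
import random

random.seed(2)

# Stage 1: a cheap covariate X is measured on everyone; the expensive
# outcome Y is only observed for the stage-2 subsample. True E[Y] = 1.0.
n = 40000
X = [random.random() for _ in range(n)]
Y = [2.0 * x + random.gauss(0, 0.5) for x in X]

# Stage-2 inclusion depends on X, so the subsample is biased:
# pi(x) = P(selected | X = x) = 0.1 + 0.8 * x.
pi = [0.1 + 0.8 * x for x in X]
S = [1 if random.random() < p else 0 for p in pi]

# The unweighted subsample mean overshoots (large-X subjects oversampled).
sub_mean = sum(y for y, s in zip(Y, S) if s) / sum(S)

# IPCW: weight each observed Y by 1 / pi(X) to recover the full-population mean.
ipcw_mean = sum(y / p for y, p, s in zip(Y, pi, S) if s) / n

print(f"subsample mean: {sub_mean:.2f}  IPCW mean: {ipcw_mean:.2f}  truth: 1.00")
```

The IPCW-TMLE of the abstract applies this weighting inside a targeted maximum likelihood estimator rather than to a simple mean, but the correction for informative second-stage sampling works the same way.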
115. Implementation of G-Computation on a Simulated Data Set: Demonstration of a Causal Inference Technique
- Author
-
Kathleen M. Mortimer, Jonathan M. Snowden, and Sherri Rose
- Subjects
Protocol (science) ,Interpretation (logic) ,Models, Statistical ,Epidemiology ,business.industry ,Practice of Epidemiology ,Marginal structural model ,Inference ,Confounding Factors, Epidemiologic ,Machine learning ,computer.software_genre ,Causality ,Set (abstract data type) ,Data set ,Causal inference ,Epidemiologic Research Design ,Medicine ,Humans ,Regression Analysis ,Computer Simulation ,Artificial intelligence ,business ,computer - Abstract
The growing body of work in the epidemiology literature focused on G-computation includes theoretical explanations of the method but very few simulations or examples of application. The small number of G-computation analyses in the epidemiology literature relative to other causal inference approaches may be partially due to a lack of didactic explanations of the method targeted toward an epidemiology audience. The authors provide a step-by-step demonstration of G-computation that is intended to familiarize the reader with this procedure. The authors simulate a data set and then demonstrate both G-computation and traditional regression to draw connections and illustrate contrasts between their implementation and interpretation relative to the truth of the simulation protocol. A marginal structural model is used for effect estimation in the G-computation example. The authors conclude by answering a series of questions to emphasize the key characteristics of causal inference techniques and the G-computation procedure in particular.
- Published
- 2011
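The step-by-step G-computation procedure this article demonstrates can be sketched on simulated data: fit an outcome regression, predict every subject's outcome under each treatment level, and average. The data-generating model, coefficients, and the linear outcome regression below are illustrative assumptions, not the article's simulation protocol:

```python
import random

random.seed(1)

# Simulated confounded data: W affects both A and Y; true effect of A is 1.5.
n = 20000
W = [random.random() for _ in range(n)]
A = [1 if random.random() < 0.3 + 0.4 * w else 0 for w in W]
Y = [1.5 * a + 2.0 * w + random.gauss(0, 1) for a, w in zip(A, W)]

# Step 1: fit the outcome regression E[Y | A, W] = b0 + b1*A + b2*W by
# ordinary least squares (normal equations solved by Gauss-Jordan elimination).
Xmat = [[1.0, a, w] for a, w in zip(A, W)]
XtX = [[sum(r[i] * r[j] for r in Xmat) for j in range(3)] for i in range(3)]
XtY = [sum(r[i] * y for r, y in zip(Xmat, Y)) for i in range(3)]

def solve(M, v):
    M = [row[:] + [vi] for row, vi in zip(M, v)]   # augmented matrix
    for i in range(3):
        piv = M[i][i]
        M[i] = [x / piv for x in M[i]]
        for k in range(3):
            if k != i:
                M[k] = [xk - M[k][i] * xi for xk, xi in zip(M[k], M[i])]
    return [row[3] for row in M]

b0, b1, b2 = solve(XtX, XtY)

# Step 2: predict each subject's outcome under A=1 and under A=0, then
# average; the difference is the marginal (G-computation) effect estimate.
g_comp = sum((b0 + b1 + b2 * w) - (b0 + b2 * w) for w in W) / n
print(f"G-computation estimate: {g_comp:.2f} (truth 1.50)")
```

With a linear outcome model the standardized difference collapses to the coefficient on A, but the two-step mechanics — fit, then predict under counterfactual treatment assignments and average — are exactly what generalizes to flexible regressions.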
116. Why Match? Matched Case-Control Studies
- Author
-
Mark J. van der Laan and Sherri Rose
- Subjects
Variable (computer science) ,Matching (statistics) ,education.field_of_study ,Computer science ,Clinical study design ,Statistics ,Population ,Confounding ,Statistical model ,Matched Case-Control Studies ,education ,Parametric statistics - Abstract
Individually matched case-control study designs are common in public health and medicine, and conditional logistic regression in a parametric statistical model is the tool most commonly used to analyze these studies. In an individually matched case-control study, the population of interest is identified, and cases are randomly sampled. Each of these cases is then matched to one or more controls based on a variable (or variables) believed to be a confounder. The main potential benefit of matching in case-control studies is a gain in efficiency, not the elimination of confounding. Therefore, when are these study designs truly beneficial?
- Published
- 2011
117. Foundations of TMLE
- Author
-
Sherri Rose and Mark J. van der Laan
- Subjects
Delta method ,Distribution (mathematics) ,Consistency (statistics) ,Applied mathematics ,Estimator ,Parameter space ,Empirical probability ,Empirical process ,Central limit theorem ,Mathematics - Abstract
An estimator of a parameter is a mapping from the data set to the parameter space. Estimators that are empirical means of a function of the unit data structure are asymptotically consistent and normally distributed due to the CLT. Such estimators are called linear in the empirical probability distribution. Most estimators are not linear, but many are approximately linear in the sense that they are linear up to a negligible (in probability) remainder term. One states that the estimator is asymptotically linear, and the relevant function of the unit data structure, centered to have mean zero, is called the influence curve of the estimator. How does one prove that an estimator is asymptotically linear? One key step is to realize that an estimator is a mapping from a possibly very large collection of empirical means of functions of the unit data structure into the parameter space. Such a collection of empirical means is called an empirical process whose behavior with respect to uniform consistency and the uniform CLT is established in empirical process theory. In this section we present succinctly that (1) a uniform central limit theorem for the vector of empirical means, combined with (2) differentiability of the estimator as a mapping from the vector of empirical means into the parameter space yields the desired asymptotic linearity. This method for establishing the asymptotic linearity and normality of the estimator is called the functional delta method (van der Vaart and Wellner 1996; Gill 1989).
- Published
- 2011
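The notion of asymptotic linearity described in the abstract above can be stated compactly. As a sketch in standard semiparametric notation (the symbols here are assumed, not quoted from the chapter): an estimator ψ̂ₙ of ψ₀ is asymptotically linear with influence curve IC if

```latex
\hat{\psi}_n - \psi_0 \;=\; \frac{1}{n}\sum_{i=1}^{n} IC(O_i) \;+\; o_P\!\left(n^{-1/2}\right),
\qquad\text{so that}\qquad
\sqrt{n}\,\bigl(\hat{\psi}_n - \psi_0\bigr) \;\xrightarrow{d}\; N\bigl(0,\ \operatorname{Var}\, IC(O)\bigr)
```

by the central limit theorem — this is the precise sense in which such estimators are "linear up to a negligible (in probability) remainder term."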
118. The Open Problem
- Author
-
Mark J. van der Laan and Sherri Rose
- Subjects
medicine.medical_specialty ,genetic structures ,Heart disease ,business.industry ,Public health ,medicine.medical_treatment ,Osteoporosis ,Hormone replacement therapy (menopause) ,medicine.disease ,eye diseases ,Increased risk ,Breast cancer ,Family medicine ,medicine ,sense organs ,Medical prescription ,business ,Stroke - Abstract
The debate over hormone replacement therapy (HRT) has been one of the biggest health discussions in recent history. Professional groups and nonprofits, such as the American College of Physicians and the American Heart Association, gave HRT their stamp of approval 15 years ago. Studies indicated that HRT was protective against osteoporosis and heart disease. HRT became big business, with millions upon millions of prescriptions filled each year. However, in 1998, the Heart and Estrogen-Progestin Replacement Study demonstrated increased risk of heart attack among women with heart disease taking HRT, and in 2002 the Women’s Health Initiative showed increased risk for breast cancer, heart disease, and stroke, among other ailments, for women on HRT. Why were there inconsistencies in the study results?
- Published
- 2011
119. Finding Quantitative Trait Loci Genes
- Author
-
Sherri Rose, Mark J. van der Laan, and Hui Wang
- Subjects
Genetics ,Inbred strain ,Polygene ,Genetic marker ,Expression quantitative trait loci ,Trait ,food and beverages ,Quantitative trait locus ,Biology ,Genome ,Genetic architecture - Abstract
The goal of quantitative trait loci (QTL) mapping is to identify genes underlying an observed trait in the genome using genetic markers. In experimental organisms, the QTL mapping experiment usually involves crossing two inbred lines with substantial differences in a trait, and then scoring the trait in the segregating progeny. A series of markers along the genome is genotyped in the segregating progeny, and associations between the trait and the QTL can be evaluated using the marker information. Of primary interest are the positions and effect sizes of QTL genes.
- Published
- 2011
120. Super Learning
- Author
-
Eric C. Polley, Sherri Rose, and Mark J. van der Laan
- Published
- 2011
121. Nested Case-Control Risk Score Prediction
- Author
-
Bruce Fireman, Sherri Rose, and Mark J. van der Laan
- Subjects
medicine.medical_specialty ,Framingham Risk Score ,business.industry ,Epidemiology ,Nested case-control study ,Statistics ,medicine ,Psychological intervention ,Estimator ,business ,Outcome (game theory) ,Regression ,Parametric statistics - Abstract
Risk scores are calculated to identify those patients at the highest level of risk for an outcome. In some cases, interventions are implemented for patients at high risk. Standard practice for risk score prediction relies heavily on parametric regression. Generating a good estimator of the function of interest using parametric regression can be a significant challenge. As discussed in Chap. 3, high-dimensional data are increasingly common in epidemiology, and researchers may have dozens, hundreds, or thousands of potential predictors that are possibly related to the outcome.
- Published
- 2011
122. Introduction to TMLE
- Author
-
Mark J. van der Laan and Sherri Rose
- Subjects
Estimation ,Computer science ,Maximum likelihood ,Causal effect ,Econometrics ,Probability distribution - Abstract
This is the second chapter in our text to deal with estimation. We started by defining the research question. This included our data, model for the probability distribution that generated the data, and the target parameter of the probability distribution of the data. We then presented the estimation of prediction functions using super learning. This leads us to the estimation of causal effects using the TMLE. This chapter introduces TMLE, and a deeper understanding of this methodology is provided in Chap. 5. Note that we use the abbreviation TMLE for targeted maximum likelihood estimation and the targeted maximum likelihood estimator. Later in this text, we discuss targeted minimum loss-based estimation, which can also be abbreviated TMLE.
- Published
- 2011
123. Targeted Learning
- Author
-
Mark J. van der Laan and Sherri Rose
- Published
- 2011
124. Independent Case-Control Studies
- Author
-
Mark J. van der Laan and Sherri Rose
- Subjects
medicine.medical_specialty ,Disease onset ,Potential risk ,business.industry ,Clinical study design ,Public health ,Family medicine ,medicine ,Case-control study ,Sample (statistics) ,Disease ,business ,Medical research - Abstract
Case-control study designs are frequently used in public health and medical research to assess potential risk factors for disease. These study designs are particularly attractive to investigators researching rare diseases, as they are able to sample known cases of disease vs. following a large number of subjects and waiting for disease onset in a relatively small number of individuals.
- Published
- 2011
125. Why TMLE?
- Author
-
Sherri Rose and Mark J. van der Laan
- Published
- 2011
126. Introduction to R Code Implementation
- Author
-
Mark J. van der Laan and Sherri Rose
- Subjects
Computer science ,Programming language ,Research questions ,Data structure ,computer.software_genre ,computer ,Coding (social sciences) - Abstract
This appendix includes a brief introduction to the implementation of super learning and the TMLE in R. Packages and supplementary code are posted online at http://www.targetedlearningbook.com. We conclude with a few coding guides for data structures and research questions presented in Parts II–IX. The book's Web site will be a continually updated resource for new code, demonstrations, and packages.
- Published
- 2011
127. Understanding TMLE
- Author
-
Sherri Rose and Mark J. van der Laan
- Published
- 2011
128. Rose and van der Laan Respond to 'Some Advantages of the Relative Excess Risk due to Interaction'
- Author
-
Sherri Rose and Mark J. van der Laan
- Subjects
Epidemiology ,Computer science ,Parametric model ,Econometrics ,Nonparametric statistics ,Estimator ,Statistical model ,Estimating equations ,Semiparametric regression ,Semiparametric model ,Parametric statistics - Abstract
We appreciate VanderWeele and Vansteelandt's perspective (1) on our article (2). Our commentary largely focused on a discussion of marginal estimators for case-control study designs not mentioned in VanderWeele and Vansteelandt's original article (3). In our presentation (2), we highlighted the case-control-weighted targeted maximum likelihood estimator (TMLE) (4–7) and Robins' “approximately valid” inverse-probability-weighted estimator for case-control data (8). We appreciate VanderWeele and Vansteelandt's continued dialogue on methods for case-control study designs, as well as their inclusion of a new double robust estimator in their commentary (1), since there is a strong need for more work in this area. In this response, we precisely frame the efficiency properties of the case-control-weighted TMLE, which have been discussed elsewhere (2, 4–7) but were not completely presented in VanderWeele and Vansteelandt's commentary and Web Appendix (available at http://aje.oxfordjournals.org/) (1) or in our original commentary (2). We also emphasize the need for flexible nonparametric estimators that incorporate machine learning in the modern “big data” era of epidemiology in large databases. When defining our research question, we must be explicit about the model we are specifying. We wish to consider either a nonparametric model or a semiparametric model, thereby making fewer restrictive assumptions on our data-generating distribution than when imposing a parametric model. We are not limited to nonparametric statistical models, and we can make additional assumptions based on investigator knowledge in a semiparametric model. The efficiency claims made for the case-control-weighted TMLE are based on this nonparametric or semiparametric model (4–7). Before comparing the efficiency of estimators, it is important to agree on the model. Comparing parametric model efficiency with nonparametric or semiparametric model efficiency is not an apt comparison. 
Our case-control weighting effectively maps a function of the full-data sampled observations into a function for the biased case-control sampled observations. It has been demonstrated that case-control weighting of the efficient TMLE for the full-data model leads to an efficient TMLE for the case-control model. The required regularity conditions have been described previously (5). The case-control-weighted TMLE with known prevalence probability is consistent if either the outcome regression or the exposure mechanism is consistently estimated, and it is efficient if both are consistently estimated. Notably, the estimator is not defined as the solution to an estimating equation, although it does solve the efficient influence curve estimating equation. We also wish to underscore that using a nonparametric or semiparametric model is not a limitation; in fact, we consider it a compelling advantage. Especially when considering the advent of large data sets in epidemiology, researchers are increasingly interested in more flexible procedures that do not rely on restrictive parametric models. Since the goal is to have a statistical model that contains the true data distribution, assuming a nonparametric or semiparametric model may be preferable, as will using an estimator that allows for the incorporation of machine learning or ensembling methods (9, 10). This avoids the problems of 1) having more parameters than observations in a parametric model, 2) committing to a specific functional form of the data, and 3) attempting to represent complex relationships with a parametric regression. Integrating machine learning methods and causal inference is a burgeoning field in statistical science, one with promising potential for new methodological innovation in epidemiology. Novel robust estimators for case-control studies are an important area of methodological work, and we look forward to future contributions from VanderWeele, Vansteelandt, and other investigators.
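To make the weighting idea in the response above concrete, here is a minimal sketch (not the authors' implementation) of how case-control weights are typically assigned under the van der Laan (2008) scheme the response references: cases receive the known prevalence probability q0 as their weight, and each of the J controls sampled per case receives (1 - q0)/J, so the weighted sample mimics the full-data distribution. The function name and data are hypothetical.

```python
# Sketch of case-control weighting: cases get weight q0 (the known
# disease prevalence); each of the J controls per case gets (1 - q0)/J.
# The weighted case-control sample then mimics the population distribution.

def case_control_weights(is_case, q0, J):
    """Return observation weights for a case-control sample.

    is_case: list of 0/1 case indicators; q0: known prevalence;
    J: number of controls sampled per case.
    """
    return [q0 if y == 1 else (1.0 - q0) / J for y in is_case]

# Hypothetical sample: 2 cases followed by 4 controls (J = 2 per case).
w = case_control_weights([1, 1, 0, 0, 0, 0], q0=0.01, J=2)
```

These weights can then be passed to any weighted estimator (e.g., a weighted regression) as the first step toward the case-control-weighted TMLE discussed in the text.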
- Published
- 2014
129. Why Match? Investigating Matched Case-Control Study Designs with Causal Effect Estimation*
- Author
-
Mark J. van der Laan and Sherri Rose
- Subjects
Statistics and Probability ,Research design ,Estimation ,Matching (statistics) ,Likelihood Functions ,Models, Statistical ,Clinical study design ,Confounding ,Marginal structural model ,Probability and statistics ,General Medicine ,Causality ,Article ,Research Design ,Case-Control Studies ,Statistics ,Econometrics ,Humans ,Statistics, Probability and Uncertainty ,Mathematics - Abstract
Matched case-control study designs are commonly implemented in the field of public health. While matching is intended to eliminate confounding, the main potential benefit of matching in case-control studies is a gain in efficiency. Methods for analyzing matched case-control studies have focused on utilizing conditional logistic regression models that provide conditional and not causal estimates of the odds ratio. This article investigates the use of case-control weighted targeted maximum likelihood estimation to obtain marginal causal effects in matched case-control study designs. We compare the use of case-control weighted targeted maximum likelihood estimation in matched and unmatched designs in an effort to explore which design yields the most information about the marginal causal effect. The procedures require knowledge of certain prevalence probabilities and were previously described by van der Laan (2008). In many practical situations where a causal effect is the parameter of interest, researchers may be better served using an unmatched design.
- Published
- 2009
130. Simple Optimal Weighting of Cases and Controls in Case-Control Studies
- Author
-
Mark J. van der Laan and Sherri Rose
- Subjects
Statistics and Probability ,Population ,Marginal structural model ,Sample (statistics) ,Biostatistics ,Logistic regression ,Article ,Risk Factors ,Statistics ,Odds Ratio ,Prevalence ,Econometrics ,Humans ,education ,Probability ,Mathematics ,Likelihood Functions ,education.field_of_study ,Models, Statistical ,Absolute risk reduction ,General Medicine ,Odds ratio ,Weighting ,Logistic Models ,Case-Control Studies ,Relative risk ,Statistics, Probability and Uncertainty - Abstract
Researchers of uncommon diseases are often interested in assessing potential risk factors. Given the low incidence of disease, these studies are frequently case-control in design. Such a design allows a sufficient number of cases to be obtained without extensive sampling and can increase efficiency; however, these case-control samples are then biased since the proportion of cases in the sample is not the same as the population of interest. Methods for analyzing case-control studies have focused on utilizing logistic regression models that provide conditional and not causal estimates of the odds ratio. This article will demonstrate the use of the prevalence probability and case-control weighted targeted maximum likelihood estimation (MLE), as described by van der Laan (2008), in order to obtain causal estimates of the parameters of interest (risk difference, relative risk, and odds ratio). It is meant to be used as a guide for researchers, with step-by-step directions to implement this methodology. We will also present simulation studies that show the improved efficiency of the case-control weighted targeted MLE compared to other techniques.
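As a rough illustration of the parameters named in this abstract, the sketch below computes weighted exposure-specific risks from hypothetical case-control data using a known prevalence q0, then forms the risk difference, relative risk, and odds ratio. This is a simple weighted plug-in, not the full targeted (TMLE) procedure the article describes; all names and data are illustrative.

```python
# Hedged sketch: normalize weights so cases jointly carry mass q0 and
# controls carry mass (1 - q0), compute weighted risks by exposure level,
# then the marginal risk difference (RD), relative risk (RR), and odds
# ratio (OR). A plug-in illustration only, not the targeted estimator.

def weighted_risks(y, a, q0):
    n_cases = sum(y)
    n_controls = len(y) - n_cases
    w = [q0 / n_cases if yi == 1 else (1 - q0) / n_controls for yi in y]
    def risk(level):
        num = sum(wi * yi for wi, yi, ai in zip(w, y, a) if ai == level)
        den = sum(wi for wi, ai in zip(w, a) if ai == level)
        return num / den
    p1, p0 = risk(1), risk(0)
    return {"RD": p1 - p0, "RR": p1 / p0,
            "OR": (p1 / (1 - p1)) / (p0 / (1 - p0))}

# Hypothetical outcomes y and exposures a; assumed prevalence q0 = 0.05.
y = [1, 1, 1, 0, 0, 0, 0, 0]
a = [1, 1, 0, 1, 0, 0, 1, 0]
est = weighted_risks(y, a, q0=0.05)
```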
- Published
- 2008
131. Sa1455 The Impact of Exclusion Criteria on a Physician's Adenoma Detection Rate
- Author
-
Michele I. Morris, Felippe O. Marcondes, Sherri Rose, Daniel A. Leffler, Katie Dean, Ateev Mehrotra, and Robert E. Schoen
- Subjects
medicine.medical_specialty ,Adenoma ,business.industry ,General surgery ,Gastroenterology ,medicine ,Radiology, Nuclear Medicine and imaging ,Detection rate ,medicine.disease ,business
- Published
- 2015
132. Predicting Suicides After Psychiatric Hospitalization in US Army Soldiers
- Author
-
Matthew K. Nock, Lisa J. Colpe, Murray B. Stein, Junlong Li, Alan M. Zaslavsky, Michael Schoenbaum, Nancy A. Sampson, Maria Petukhova, Lisa Lewandowski-Romps, Robert J. Ursano, Anthony J. Rosellini, Tianxi Cai, Steven G. Heeringa, Christopher G. Ivany, Millard Brown, James A. Naifeh, Sherri Rose, Christopher H. Warner, Michael L. Gruber, Evelyn J. Bromet, Simon Wessely, Stephen E. Gilman, Amy Millikan-Bell, Carol S. Fullerton, Ronald C. Kessler, and Kenneth L. Cox
- Subjects
medicine.medical_specialty ,business.industry ,Poison control ,Odds ratio ,Suicide prevention ,Occupational safety and health ,Psychiatry and Mental health ,Injury prevention ,medicine ,Adverse effect ,Risk assessment ,Psychiatry ,business ,Psychopathology - Abstract
included sociodemographics (male sex [odds ratio (OR), 7.9; 95% CI, 1.9-32.6] and late age of enlistment [OR, 1.9; 95% CI, 1.0-3.5]), criminal offenses (verbal violence [OR, 2.2; 95% CI, 1.2-4.0] and weapons possession [OR, 5.6; 95% CI, 1.7-18.3]), prior suicidality [OR, 2.9; 95% CI, 1.7-4.9], aspects of prior psychiatric inpatient and outpatient treatment (eg, number of antidepressant prescriptions filled in the past 12 months [OR, 1.3; 95% CI, 1.1-1.7]), and disorders diagnosed during the focal hospitalizations (eg, nonaffective psychosis [OR, 2.9; 95% CI, 1.2-7.0]). A total of 52.9% of posthospitalization suicides occurred after the 5% of hospitalizations with highest predicted suicide risk (3824.1 suicides per 100 000 person-years). These highest-risk hospitalizations also accounted for significantly elevated proportions of several other adverse posthospitalization outcomes (unintentional injury deaths, suicide attempts, and subsequent hospitalizations).
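The concentration statistic this abstract reports (e.g., "52.9% of posthospitalization suicides occurred after the 5% of hospitalizations with highest predicted suicide risk") can be computed from predicted risks and observed events as sketched below. The data and function name are hypothetical, not from the study.

```python
# Sketch: share of observed events captured by the top fraction of
# predicted risk, the kind of concentration statistic quoted above.

def share_of_events_in_top(pred_risk, events, top_frac):
    order = sorted(range(len(pred_risk)),
                   key=lambda i: pred_risk[i], reverse=True)
    k = max(1, int(round(top_frac * len(pred_risk))))
    top = set(order[:k])
    return sum(events[i] for i in top) / sum(events)

# Hypothetical predicted risks and 0/1 event indicators.
risk = [0.9, 0.8, 0.3, 0.2, 0.15, 0.1, 0.05, 0.05, 0.02, 0.01]
evts = [1,   1,   0,   1,   0,    0,   0,    0,    0,    0]
share = share_of_events_in_top(risk, evts, top_frac=0.2)  # top 2 of 10
```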
- Published
- 2015
133. Modelling the network of cell cycle transcription factors in the yeast Saccharomyces cerevisiae
- Author
-
Matteo Pellegrini, Sherri Rose, Niels Grønbech-Jensen, David R. Haynor, and Shawn J. Cokus
- Subjects
Saccharomyces cerevisiae Proteins ,Transcription, Genetic ,Saccharomyces cerevisiae ,Cell Cycle Proteins ,lcsh:Computer applications to medicine. Medical informatics ,Biochemistry ,Models, Biological ,Structural Biology ,Gene Expression Regulation, Fungal ,Transcriptional regulation ,Computer Simulation ,Cell Cycle Protein ,lcsh:QH301-705.5 ,Molecular Biology ,Gene ,Transcription factor ,biology ,Applied Mathematics ,Cell cycle ,biology.organism_classification ,Computer Science Applications ,Cell biology ,lcsh:Biology (General) ,lcsh:R858-859.7 ,DNA microarray ,Signal transduction ,Signal Transduction ,Transcription Factors ,Research Article - Abstract
Background Reverse-engineering regulatory networks is one of the central challenges for computational biology. Many techniques have been developed to accomplish this by utilizing transcription factor binding data in conjunction with expression data. Of these approaches, several have focused on the reconstruction of the cell cycle regulatory network of Saccharomyces cerevisiae. The emphasis of these studies has been to model the relationships between transcription factors and their target genes. In contrast, here we focus on reverse-engineering the network of relationships among transcription factors that regulate the cell cycle in S. cerevisiae. Results We have developed a technique to reverse-engineer networks of the time-dependent activities of transcription factors that regulate the cell cycle in S. cerevisiae. The model utilizes linear regression to first estimate the activities of transcription factors from expression time series and genome-wide transcription factor binding data. We then use least squares to construct a model of the time evolution of the activities. We validate our approach in two ways: by demonstrating that it accurately models expression data and by demonstrating that our reconstructed model is similar to previously-published models of transcriptional regulation of the cell cycle. Conclusion Our regression-based approach allows us to build a general model of transcriptional regulation of the yeast cell cycle that includes additional factors and couplings not reported in previously-published models. Our model could serve as a starting point for targeted experiments that test the predicted interactions. In the future, we plan to apply our technique to reverse-engineer other systems where both genome-wide time series expression data and transcription factor binding data are available.
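The two-stage regression this abstract outlines can be sketched as follows; dimensions, variable names, and the simulated data are illustrative, not from the paper. Stage 1 estimates hidden transcription factor activities from expression and binding data by least squares; stage 2 fits a linear model of how those activities evolve over time.

```python
import numpy as np

# Sketch of the two-stage approach: (1) with expression E (genes x time)
# and binding data B (genes x TFs), recover TF activities A from
# E ~= B @ A by least squares; (2) fit an evolution matrix M so that
# A[:, t+1] ~= M @ A[:, t].

rng = np.random.default_rng(0)
n_genes, n_tfs, n_times = 50, 4, 10
B = rng.random((n_genes, n_tfs))        # binding strengths
A_true = rng.random((n_tfs, n_times))   # hidden TF activities
E = B @ A_true                          # noiseless expression, for illustration

# Stage 1: recover activities gene-wise by least squares.
A_hat, *_ = np.linalg.lstsq(B, E, rcond=None)

# Stage 2: fit the time-evolution matrix from consecutive time points.
M, *_ = np.linalg.lstsq(A_hat[:, :-1].T, A_hat[:, 1:].T, rcond=None)
M = M.T  # so that A_hat[:, t+1] is approximated by M @ A_hat[:, t]
```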
- Published
- 2006
134. Big Data and the Future
- Author
-
Sherri Rose
- Subjects
Statistics and Probability ,Rose (mathematics) ,History ,ComputingMilieux_THECOMPUTINGPROFESSION ,Operations research ,business.industry ,Big data ,ComputingMilieux_COMPUTERSANDEDUCATION ,Economic history ,Software_PROGRAMMINGTECHNIQUES ,business ,ComputingMilieux_MISCELLANEOUS - Abstract
At the beginning of her career, Sherri Rose discusses big data and marvels at its potential.
- Published
- 2012
135. Rose et al. Respond to 'G-Computation and Standardization in Epidemiology'
- Author
-
Kathleen M. Mortimer, Jonathan M. Snowden, and Sherri Rose
- Subjects
Rose (mathematics) ,Standardization ,Epidemiology ,Computer science ,G computation ,Library science
- Published
- 2011
136. Using human serum albumin to perform adductomics in populations of interest
- Author
-
Haodong Li, Stephen M. Rappaport, William E. Funk, Anthony T. Iavarone, M. Demireva, H. Gregoryan, Sixin Samantha Lu, Evan R. Williams, Sherri Rose, and M. J. van der Laan
- Subjects
medicine.medical_specialty ,Endocrinology ,biology ,Adductomics ,Chemistry ,Internal medicine ,biology.protein ,medicine ,General Medicine ,Bovine serum albumin ,Toxicology ,Human serum albumin ,medicine.drug
- Published
- 2010
137. The effects of co-morbidity in defining major depression subtypes associated with long-term course and severity
- Author
-
Maria Carmen Viana, Klaas J. Wardenaar, Andrew A. Nierenberg, Michaela Gruber, Sherri Rose, Junlong Li, Brendan Bunting, Daphna Levinson, Zahari Zarkov, Maurizio Fava, Akira Fukao, S. Florescu, Y. Huang, Aimee N. Karam, Robert A. Schoevers, Miguel Xavier, Kate M. Scott, Evelyn J. Bromet, Jordi Alonso, Ronald C. Kessler, Nancy A. Sampson, J. Posada-Villa, P. de Jonge, Oye Gureje, Chiyi Hu, Marsha A. Wilcox, M. E. Medina Mora, Nezar Ismet Taib, H. M. van Loo, M. Petukhova, Tianxi Cai, Interdisciplinary Centre Psychopathology and Emotion regulation (ICPE), Life Course Epidemiology (LCE), and Perceptual and Cognitive Neuroscience (PCN)
- Subjects
Male ,DISORDER ,Co-morbidity ,Comorbidity ,Melancholic depression ,Global Health ,depression symptom profiles ,Severity of Illness Index ,Cluster Analysis ,RECURSIVE PARTITIONING ANALYSIS ,Depressió psíquica ,PREDICTORS ,Applied Psychology ,POPULATION ,Aged, 80 and over ,RISK ,education.field_of_study ,risk assessment ,Middle Aged ,Latent class model ,Psychiatry and Mental health ,machine learning ,Disease Progression ,REGULARIZATION ,Major depressive disorder ,Anxiety ,Female ,medicine.symptom ,Psychology ,predictive modeling ,Psychopathology ,Clinical psychology ,Adult ,LARGE-SAMPLE ,medicine.medical_specialty ,Adolescent ,Population ,Article ,Young Adult ,Comorbiditat ,Artificial Intelligence ,Severity of illness ,medicine ,Humans ,education ,Psychiatry ,Aged ,Retrospective Studies ,Depressive Disorder, Major ,Salut mundial ,depression subtypes ,data mining ,INPATIENTS ,REMISSION ,medicine.disease ,elastic net ,SUICIDALITY - Abstract
BACKGROUND: Although variation in the long-term course of major depressive disorder (MDD) is not strongly predicted by existing symptom subtype distinctions, recent research suggests that prediction can be improved by using machine learning methods. However, it is not known whether these distinctions can be refined by added information about co-morbid conditions. The current report presents results on this question. METHOD: Data came from 8261 respondents with lifetime DSM-IV MDD in the World Health Organization (WHO) World Mental Health (WMH) Surveys. Outcomes included four retrospectively reported measures of persistence/severity of course (years in episode; years in chronic episodes; hospitalization for MDD; disability due to MDD). Machine learning methods (regression tree analysis; lasso, ridge and elastic net penalized regression) followed by k-means cluster analysis were used to augment previously detected subtypes with information about prior co-morbidity to predict these outcomes. RESULTS: Predicted values were strongly correlated across outcomes. Cluster analysis of predicted values found three clusters with consistently high, intermediate or low values. The high-risk cluster (32.4% of cases) accounted for 56.6-72.9% of high persistence, high chronicity, hospitalization and disability. This high-risk cluster had both higher sensitivity and likelihood ratio positive (LR+; relative proportions of cases in the high-risk cluster versus other clusters having the adverse outcomes) than in a parallel analysis that excluded measures of co-morbidity as predictors. CONCLUSIONS: Although the results using the retrospective data reported here suggest that useful MDD subtyping distinctions can be made with machine learning and clustering across multiple indicators of illness persistence/severity, replication with prospective data is needed to confirm this preliminary conclusion. 
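The abstract's analysis strategy, penalized regression followed by k-means clustering of predicted values into low/intermediate/high-risk groups, can be sketched on simulated data as below. The study used several learners (regression trees; lasso, ridge, and elastic net); for a self-contained illustration, only ridge regression and a tiny 1-d k-means are shown, and all data are simulated.

```python
import numpy as np

# Sketch: (1) fit a penalized (ridge) regression predicting a
# persistence/severity outcome; (2) cluster the predicted values with
# k-means (k = 3) into low/intermediate/high groups, as in the abstract.

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))                   # e.g., subtype and co-morbidity indicators
beta = np.array([2.0, -1.5, 1.0, 0, 0, 0, 0, 0])
y = X @ beta + rng.normal(scale=0.5, size=300)  # simulated severity outcome

# Ridge regression closed form: (X'X + lambda I)^-1 X'y.
lam = 1.0
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ y)
pred = X @ beta_hat

# k-means on the 1-d predicted values, initialized at spread quantiles.
centers = np.quantile(pred, [0.1, 0.5, 0.9])
for _ in range(50):
    labels = np.argmin(np.abs(pred[:, None] - centers[None, :]), axis=1)
    centers = np.array([pred[labels == k].mean() if np.any(labels == k)
                        else centers[k] for k in range(3)])
```

Cluster 2 then plays the role of the "high-risk" group whose share of adverse outcomes would be tabulated.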
The World Health Organization World Mental Health (WMH) Survey Initiative is supported by the National Institute of Mental Health (NIMH; R01 MH070884), the John D. and Catherine T. MacArthur Foundation, the Pfizer Foundation, the US Public Health Service (R13-MH066849, R01-MH069864, and R01 DA016558), the Fogarty International Center (FIRCA R03-TW006481). Peter de Jonge is supported by a VICI grant (no: 91812607) from the Netherlands Research Foundation (NWO-ZonMW). The São Paulo Megacity Mental Health Survey is supported by the State of São Paulo Research Foundation (FAPESP) Thematic Project Grant 03/00204-3. The World Mental Health Japan (WMHJ) Survey is supported by the Grant for Research on Psychiatric and Neurological Diseases and Mental Health (H13-SHOGAI-023, H14-TOKUBETSU-026, H16-KOKORO-013) from the Japan Ministry of Health, Labour and Welfare. The Lebanese National Mental Health Survey (L.E.B.A.N.O.N.) is supported by National Institute of Health / Fogarty International Center (R03 TW006481-01). The Mexican National Comorbidity Survey (MNCS) is supported by The National Institute of Psychiatry Ramon de la Fuente (INPRFMDIES 4280) and by the National Council on Science and Technology (CONACyT-G30544- H). The Ukraine Comorbid Mental Disorders during Periods of Social Disruption (CMDPSD) study is funded by the US National Institute of Mental Health (RO1-MH61905). The US National Comorbidity Survey Replication (NCS-R) is supported by the National Institute of Mental Health (NIMH; U01- MH60220), the Robert Wood Johnson Foundation (RWJF; Grant 044708)
138. Clinical Research Reporting Paradigms May Incompletely Describe Participant Identities.
- Author
-
Enache Oana M, Lisa GR, and Sherri R
- Abstract
Reporting of participants' baseline characteristics in clinical research is important for understanding a given study's context and typically occurs in a tabular format. However, this format incompletely and ambiguously describes included participants, as their identities are more fully represented by an intersecting set of sociodemographic characteristics rather than discrete characteristics in a table. Standard tabular reporting practices therefore introduce limitations in assessing a study's representativeness as well as its internal validity and external validity. To address this, we propose the addition of a simple graph that more clearly shows the joint distribution of baseline sociodemographic characteristics in a given study. We also discuss several practical considerations for the implementation of such graphs in the communication of clinical research., (© The Author(s) 2023. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.)
- Published
- 2024
- Full Text
- View/download PDF
139. Many Clinicians Implement Digital Equity Strategies To Treat Opioid Use Disorder.
- Author
-
Uscher-Pines L, Riedel LE, Mehrotra A, Rose S, Busch AB, and Huskamp HA
- Subjects
- Humans, Opioid-Related Disorders drug therapy, Telemedicine
- Abstract
Drawing upon a longitudinal survey of clinicians who treat patients with opioid use disorder (OUD), we report changes over time in telemedicine use, clinicians' attitudes, and digital equity strategies. Clinicians reported less use of telemedicine (both video and audio-only) in 2022 than in 2020. In March 2022, 77.0 percent of clinician respondents reported implementing digital equity strategies to help patients overcome barriers to video visits.
- Published
- 2023
- Full Text
- View/download PDF
140. How Is Telemedicine Being Used In Opioid And Other Substance Use Disorder Treatment?
- Author
-
Huskamp HA, Busch AB, Souza J, Uscher-Pines L, Rose S, Wilcock A, Landon BE, and Mehrotra A
- Subjects
- Adolescent, Adult, Aged, Child, Female, Hospitalization statistics & numerical data, Humans, Insurance Claim Review economics, Male, Medicare Part C statistics & numerical data, Middle Aged, Private Sector statistics & numerical data, Retrospective Studies, United States, Young Adult, Analgesics, Opioid adverse effects, Health Services Accessibility, Insurance Claim Review statistics & numerical data, Substance-Related Disorders therapy, Telemedicine methods, Telemedicine statistics & numerical data
- Abstract
Only a small proportion of people with a substance use disorder (SUD) receive treatment. The shortage of SUD treatment providers, particularly in rural areas, is an important driver of this treatment gap. Telemedicine could be a means of expanding access to treatment. However, several key regulatory and reimbursement barriers to greater use of telemedicine for SUD (tele-SUD) exist, and both Congress and the states are considering or have recently passed legislation to address them. To inform these efforts, we describe how tele-SUD is being used. Using claims data for 2010-17 from a large commercial insurer, we identified characteristics of tele-SUD users and examined how tele-SUD is being used in conjunction with in-person SUD care. Despite a rapid increase in tele-SUD over the study period, we found low use rates overall, particularly relative to the growth in telemental health. Tele-SUD is primarily used to complement in-person care and is disproportionately used by those with relatively severe SUD. Given the severity of the opioid epidemic, low rates of tele-SUD use represent a missed opportunity. As tele-SUD becomes more available, it will be important to monitor closely which tele-SUD delivery models are being used and their impact on access and outcomes.
- Published
- 2018
- Full Text
- View/download PDF
141. Rapid Growth In Mental Health Telemedicine Use Among Rural Medicare Beneficiaries, Wide Variation Across States.
- Author
-
Mehrotra A, Huskamp HA, Souza J, Uscher-Pines L, Rose S, Landon BE, Jena AB, and Busch AB
- Subjects
- Adult, Fee-for-Service Plans, Female, Humans, Male, Mental Health, Middle Aged, United States, Medicare statistics & numerical data, Mental Disorders therapy, Rural Population, Telemedicine statistics & numerical data
- Abstract
Congress and many state legislatures are considering expanding access to telemedicine. To inform this debate, we analyzed Medicare fee-for-service claims for the period 2004-14 to understand trends in and recent use of telemedicine for mental health care, also known as telemental health. The study population consisted of rural beneficiaries with a diagnosis of any mental illness or serious mental illness. The number of telemental health visits grew on average 45.1 percent annually, and by 2014 there were 5.3 and 11.8 telemental health visits per 100 rural beneficiaries with any mental illness or serious mental illness, respectively. There was notable variation across states: In 2014 nine had more than twenty-five visits per 100 beneficiaries with serious mental illness, while four states and the District of Columbia had none. Compared to other beneficiaries with mental illness, beneficiaries who received a telemental health visit were more likely to be younger than sixty-five, be eligible for Medicare because of disability, and live in a relatively poor community. States with a telemedicine parity law and a pro-telemental health regulatory environment had significantly higher rates of telemental health use than those that did not., (Project HOPE—The People-to-People Health Foundation, Inc.)
- Published
- 2017
- Full Text
- View/download PDF
142. Lower- Versus Higher-Income Populations In The Alternative Quality Contract: Improved Quality And Similar Spending.
- Author
-
Song Z, Rose S, Chernew ME, and Safran DG
- Subjects
- Censuses, Female, Humans, Male, Massachusetts, Reimbursement, Incentive economics, Blue Cross Blue Shield Insurance Plans economics, Health Expenditures statistics & numerical data, Income statistics & numerical data, Quality Improvement statistics & numerical data
- Abstract
As population-based payment models become increasingly common, it is crucial to understand how such payment models affect health disparities. We evaluated health care quality and spending among enrollees in areas with lower versus higher socioeconomic status in Massachusetts before and after providers entered into the Alternative Quality Contract, a two-sided population-based payment model with substantial incentives tied to quality. We compared changes in process measures, outcome measures, and spending between enrollees in areas with lower and higher socioeconomic status from 2006 to 2012 (outcome measures were measured after the intervention only). Quality improved for all enrollees in the Alternative Quality Contract after their provider organizations entered the contract. Process measures improved 1.2 percentage points per year more among enrollees in areas with lower socioeconomic status than among those in areas with higher socioeconomic status. Outcome measure improvement was no different between the subgroups; neither were changes in spending. Larger or comparable improvements in quality among enrollees in areas with lower socioeconomic status suggest a potential narrowing of disparities. Strong pay-for-performance incentives within a population-based payment model could encourage providers to focus on improving quality for more disadvantaged populations., (Project HOPE—The People-to-People Health Foundation, Inc.)
- Published
- 2017
- Full Text
- View/download PDF
143. Risk-Adjustment Simulation: Plans May Have Incentives To Distort Mental Health And Substance Use Coverage.
- Author
-
Montz E, Layton T, Busch AB, Ellis RP, Rose S, and McGuire TG
- Subjects
- Adult, Chronic Disease economics, Female, Health Insurance Exchanges economics, Humans, Insurance Coverage economics, Insurance, Health economics, Insurance, Health legislation & jurisprudence, Male, Patient Protection and Affordable Care Act economics, Risk Adjustment legislation & jurisprudence, United States, Computer Simulation, Mental Disorders economics, Motivation, Risk Adjustment economics, Substance-Related Disorders economics
- Abstract
Under the Affordable Care Act, the risk-adjustment program is designed to compensate health plans for enrolling people with poorer health status so that plans compete on cost and quality rather than the avoidance of high-cost individuals. This study examined health plan incentives to limit covered services for mental health and substance use disorders under the risk-adjustment system used in the health insurance Marketplaces. Through a simulation of the program on a population constructed to reflect Marketplace enrollees, we analyzed the cost consequences for plans enrolling people with mental health and substance use disorders. Our assessment points to systematic underpayment to plans for people with these diagnoses. We document how Marketplace risk adjustment does not remove incentives for plans to limit coverage for services associated with mental health and substance use disorders. Adding mental health and substance use diagnoses used in Medicare Part D risk adjustment is one potential policy step toward addressing this problem in the Marketplaces., (Project HOPE—The People-to-People Health Foundation, Inc.)
- Published
- 2016
- Full Text
- View/download PDF
144. Variation In Accountable Care Organization Spending And Sensitivity To Risk Adjustment: Implications For Benchmarking.
- Author
-
Rose S, Zaslavsky AM, and McWilliams JM
- Subjects
- Accountable Care Organizations trends, Aged, Aged, 80 and over, Benchmarking economics, Databases, Factual, Female, Health Care Surveys, Humans, Male, Medicare statistics & numerical data, Retrospective Studies, Sensitivity and Specificity, United States, Accountable Care Organizations economics, Health Expenditures statistics & numerical data, Medicare economics, Risk Adjustment economics
- Abstract
Spending targets (or benchmarks) for accountable care organizations (ACOs) participating in the Medicare Shared Savings Program must be set carefully to encourage program participation while achieving fiscal goals and minimizing unintended consequences, such as penalizing ACOs for serving sicker patients. Recently proposed regulatory changes include measures to make benchmarks more similar for ACOs in the same area with different historical spending levels. We found that ACOs vary widely in how their spending levels compare with those of other local providers after standard case-mix adjustments. Additionally adjusting for survey measures of patient health meaningfully reduced the variation in differences between ACO spending and local average fee-for-service spending, but substantial variation remained, which suggests that differences in care efficiency between ACOs and local non-ACO providers vary widely. Accordingly, measures to equilibrate benchmarks between high- and low-spending ACOs--such as setting benchmarks to risk-adjusted average fee-for-service spending in an area--should be implemented gradually to maintain participation by ACOs with high spending. Use of survey information also could help mitigate perverse incentives for risk selection and upcoding and limit unintended consequences of new benchmarking methodologies for ACOs serving sicker patients., (Project HOPE—The People-to-People Health Foundation, Inc.)
- Published
- 2016
- Full Text
- View/download PDF