205 results on '"Schaubel DE"'
Search Results
102. Serum sodium and survival benefit of liver transplantation.
- Author
-
Sharma P, Schaubel DE, Goodrich NP, and Merion RM
- Subjects
- Adult, Biomarkers blood, Decision Support Techniques, End Stage Liver Disease blood, End Stage Liver Disease diagnosis, End Stage Liver Disease mortality, Female, Humans, Hyponatremia diagnosis, Hyponatremia mortality, Male, Middle Aged, Patient Selection, Predictive Value of Tests, Proportional Hazards Models, Registries, Risk Factors, Time Factors, Tissue and Organ Procurement, Treatment Outcome, United States, Waiting Lists, End Stage Liver Disease surgery, Hyponatremia blood, Liver Transplantation adverse effects, Liver Transplantation mortality, Sodium blood
- Abstract
Hyponatremia is associated with elevated wait-list mortality among end-stage liver disease candidates for liver transplantation (LT). However, the effect of low serum sodium on the survival benefit of LT has not been examined. We sought to determine whether pretransplant hyponatremia is associated with an altered LT survival benefit. Data were obtained from the Scientific Registry of Transplant Recipients. The study population consisted of adults (age ≥ 18 years) placed on the waiting list for LT between January 1, 2005 and December 31, 2012 (n = 69,213). The effect of hyponatremia on the survival benefit was assessed via sequential stratification, an extension of Cox regression. Each transplant recipient was matched to appropriate candidates then active on the waiting list with the same Model for End-Stage Liver Disease (MELD) score and in the same donation service area. The focus of the analysis was the interaction between the serum sodium and the MELD score with respect to the survival benefit of LT; this was defined as the covariate-adjusted hazard ratio contrasting post-LT mortality and pre-LT mortality. The LT survival benefit increased significantly with decreasing serum sodium values when the MELD scores were >11. The survival benefit of LT was not affected by serum sodium for patients with MELD scores ≤ 11. In conclusion, the LT survival benefit (or lack thereof) is independent of serum sodium for patients with MELD scores ≤ 11. The increase in the survival benefit with decreasing serum sodium among patients with MELD scores > 11 is consistent with recently approved changes to the allocation system incorporating serum sodium., (© 2015 American Association for the Study of Liver Diseases.)
- Published
- 2015
- Full Text
- View/download PDF
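The matching step behind sequential stratification (entry 102) can be sketched in a few lines: each transplant recipient is compared with candidates still active on the waiting list at the transplant date, with the same MELD score and in the same donation service area. This is an illustrative reconstruction, not the authors' code; the record fields (`meld`, `dsa`, `active_until`, `transplant_day`) are hypothetical names.

```python
def match_recipient(recipient, candidates):
    """Return candidates comparable to `recipient` at its transplant time."""
    return [
        c for c in candidates
        if c["id"] != recipient["id"]
        and c["meld"] == recipient["meld"]          # same MELD score
        and c["dsa"] == recipient["dsa"]            # same donation service area
        # candidate must still be waiting (not yet transplanted or removed)
        and c["active_until"] >= recipient["transplant_day"]
    ]

recipient = {"id": 1, "meld": 25, "dsa": "A", "transplant_day": 100}
candidates = [
    {"id": 2, "meld": 25, "dsa": "A", "active_until": 150},  # match
    {"id": 3, "meld": 25, "dsa": "B", "active_until": 200},  # wrong DSA
    {"id": 4, "meld": 18, "dsa": "A", "active_until": 300},  # wrong MELD
    {"id": 5, "meld": 25, "dsa": "A", "active_until": 90},   # left list early
]
matched = match_recipient(recipient, candidates)
print([c["id"] for c in matched])  # -> [2]
```

The survival-benefit hazard ratio is then estimated within these matched sets via stratified Cox regression, which the sketch above does not attempt.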
103. Semiparametric methods for center effect measures based on the ratio of survival functions.
- Author
-
He K and Schaubel DE
- Subjects
- Computer Simulation, Humans, Kidney Transplantation mortality, Kidney Transplantation statistics & numerical data, Life Tables, Proportional Hazards Models, Models, Statistical, Survival Analysis
- Abstract
The survival function is often of chief interest in epidemiologic studies of time to an event. We develop methods for evaluating center-specific survival outcomes through a ratio of survival functions. The proposed method assumes a center-stratified additive hazards model, which provides a convenient framework for our purposes. Under the proposed methods, the center effects measure is cast as the ratio of subject-specific survival functions under two scenarios: the scenario in which the subject is treated at center j; and that wherein the subject is treated at a hypothetical center with survival function equal to the population average. The proposed measure reduces to the ratio of baseline survival functions, but is invariant to the choice of baseline covariate level. We derive the asymptotic properties of the proposed estimators, and assess finite-sample characteristics through simulation. The proposed methods are applied to national kidney transplant data.
- Published
- 2014
- Full Text
- View/download PDF
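The invariance property claimed in entry 103 is easy to verify numerically. Under a center-stratified additive hazards model, lambda_j(t | Z) = lambda_0j(t) + beta'Z, the ratio of subject-specific survival functions between center j and the population-average center reduces to exp(-(Lambda_0j(t) - Lambda_0bar(t))), which does not depend on the covariate Z. A small check with constant baseline hazards (all values illustrative):

```python
import math

beta = 0.3
lam_j, lam_bar = 0.10, 0.08   # baseline hazards: center j vs. population average
t = 2.0

def surv(lam0, z, t):
    # survival under additive hazards with a constant baseline hazard lam0
    return math.exp(-(lam0 + beta * z) * t)

for z in (0.0, 1.0, 5.0):
    ratio = surv(lam_j, z, t) / surv(lam_bar, z, t)
    print(round(ratio, 6))  # same value for every z

print(round(math.exp(-(lam_j - lam_bar) * t), 6))  # the common ratio
```

The beta term cancels in the ratio, which is the paper's rationale for using survival-function ratios as a center effect measure.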
104. A weighted cumulative sum (WCUSUM) to monitor medical outcomes with dependent censoring.
- Author
-
Sun RJ, Kalbfleisch JD, and Schaubel DE
- Subjects
- Biostatistics, Computer Simulation, Humans, Liver Transplantation statistics & numerical data, Models, Statistical, Outcome Assessment, Health Care statistics & numerical data, Patient Selection, Registries statistics & numerical data, Severity of Illness Index, Tissue and Organ Procurement statistics & numerical data, United States epidemiology, Liver Transplantation mortality, Waiting Lists mortality
- Abstract
We develop a weighted cumulative sum (WCUSUM) to evaluate and monitor pre-transplant waitlist mortality of facilities in the context where transplantation is considered to be dependent censoring. Waitlist patients are evaluated multiple times in order to update their current medical condition as reflected in a time-dependent variable called the Model for End-Stage Liver Disease (MELD) score. Higher MELD scores are indicative of higher pre-transplant death risk. Moreover, under the current liver allocation system, patients with higher MELD scores receive higher priority for liver transplantation. To evaluate the waitlist mortality of transplant centers, it is important to take this dependent censoring into consideration. We assume a 'standard' transplant practice through a transplant model and utilize inverse probability censoring weights to construct a WCUSUM. We evaluate the properties of a weighted zero-mean process as the basis of the proposed WCUSUM. We then discuss a resampling technique to obtain control limits. The proposed WCUSUM is illustrated through the analysis of national transplant registry data., (Copyright © 2014 John Wiley & Sons, Ltd.)
- Published
- 2014
- Full Text
- View/download PDF
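A heavily simplified sketch of the monitoring idea in entry 104: each waitlist period contributes an observed-minus-expected death increment, inflated by an IPCW-type weight so that removal by transplantation (the dependent censoring) does not bias the chart. The paper's actual WCUSUM is built on a weighted zero-mean process with resampling-based control limits; everything below (the one-sided form, the allowance `k`, the numbers) is illustrative only.

```python
def wcusum(records, k=0.0):
    """records: (observed_death 0/1, expected_deaths, ipcw_weight) per period."""
    stats = []
    s = 0.0
    for obs, exp_, w in records:
        s = max(0.0, s + w * (obs - exp_) - k)  # one-sided: never below zero
        stats.append(s)
    return stats

periods = [(0, 0.02, 1.1), (1, 0.05, 1.3), (0, 0.03, 1.0), (1, 0.04, 1.2)]
path = wcusum(periods)
# a signal would be flagged when the path crosses a resampling-based limit
print([round(x, 3) for x in path])  # -> [0.0, 1.235, 1.205, 2.357]
```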
105. Proportional hazards regression in the presence of missing study eligibility information.
- Author
-
Pan Q and Schaubel DE
- Subjects
- Algorithms, Computer Simulation, Humans, Kidney Failure, Chronic therapy, Kidney Transplantation methods, Tissue Donors, Likelihood Functions, Proportional Hazards Models, Survival Analysis
- Abstract
We consider the study of censored survival times in the situation where the available data consist of both eligible and ineligible subjects, and information distinguishing the two groups is sometimes missing. A complete-case analysis in this context would use only subjects known to be eligible, resulting in inefficient and potentially biased estimators. We propose a two-step procedure which resembles the EM algorithm but is computationally much faster. In the first step, one estimates the conditional expectation of the missing eligibility indicators given the observed data using a logistic regression based on the complete cases (i.e., subjects with non-missing eligibility indicator). In the second step, maximum likelihood estimators are obtained from a weighted Cox proportional hazards model, with the weights being either observed eligibility indicators or estimated conditional expectations thereof. Under ignorable missingness, the estimators from the second step are proven to be consistent and asymptotically normal, with explicit variance estimators. We demonstrate through simulation that the proposed methods perform well for moderate-sized samples and are robust in the presence of eligibility indicators that are missing not at random. The proposed procedure is more efficient and more robust than the complete-case analysis and, unlike the EM algorithm, does not require time-consuming iteration. Although the proposed methods are applicable generally, they would be most useful for large data sets (e.g., administrative data), for which the computational savings outweigh the price one has to pay for making various approximations in avoiding iteration. We apply the proposed methods to national kidney transplant registry data.
- Published
- 2014
- Full Text
- View/download PDF
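The first step of the two-step procedure in entry 105 amounts to a simple weight assignment: a subject's weight for the weighted Cox fit is the eligibility indicator when observed, and otherwise the estimated conditional probability of eligibility from a logistic model fit to the complete cases. The coefficients below are hypothetical placeholders, not fitted values.

```python
import math

def logistic_prob(x, coefs, intercept):
    """Predicted probability from a fitted logistic regression."""
    eta = intercept + sum(b * v for b, v in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-eta))

coefs, intercept = [0.8, -0.2], -0.1   # hypothetical fitted values

subjects = [
    {"x": [1.0, 2.0], "eligible": 1},     # observed eligible
    {"x": [0.0, 1.0], "eligible": 0},     # observed ineligible
    {"x": [1.0, 0.5], "eligible": None},  # eligibility indicator missing
]

weights = [
    s["eligible"] if s["eligible"] is not None
    else logistic_prob(s["x"], coefs, intercept)
    for s in subjects
]
print([round(w, 3) for w in weights])  # -> [1, 0, 0.646]
```

The second step then feeds these weights directly into a weighted Cox partial likelihood, which is what lets the procedure avoid EM iteration.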
106. Methods for comparing center-specific survival outcomes using direct standardization.
- Author
-
He K and Schaubel DE
- Subjects
- Algorithms, Bias, Humans, Kidney Transplantation mortality, Models, Statistical, Outcome Assessment, Health Care statistics & numerical data, Proportional Hazards Models, Outcome Assessment, Health Care methods, Survival Analysis
- Abstract
The evaluation of center-specific outcomes is often through survival analysis methods. Such evaluations must account for differences in the distribution of patient characteristics across centers. In the context of censored event times, it is also important that the measure chosen to evaluate centers not be influenced by imbalances in the center-specific censoring distributions. The practice of using center indicators in a hazard regression model is often invalid, inconvenient, or undesirable to carry out. We propose a semiparametric version of the standardized rate ratio (SRR) useful for the evaluation of centers with respect to a right-censored event time. The SRR for center j can be interpreted as the ratio of the expected number of deaths in the total population (if the total population were in fact subject to the center j mortality hazard) to the observed number of events. The proposed measure is not affected by differences in center-specific covariate or censoring distributions. Asymptotic properties of the proposed estimators are derived, with finite-sample properties examined through simulation studies. The proposed methods are applied to national kidney transplant data., (Copyright © 2014 John Wiley & Sons, Ltd.)
- Published
- 2014
- Full Text
- View/download PDF
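The direct-standardization logic of the SRR in entry 106 can be illustrated with a toy piecewise-constant version: apply center j's covariate-specific death rates to the *total* population, then divide the expected count by the observed total. The rates, strata, and follow-up times below are illustrative; the paper's estimator is semiparametric rather than piecewise-exponential.

```python
def srr(center_rates, population, observed_total):
    """center_rates: death rate per unit time by risk stratum for center j."""
    expected = sum(center_rates[p["stratum"]] * p["followup"] for p in population)
    return expected / observed_total

population = [  # every patient in the study, regardless of center
    {"stratum": "low",  "followup": 4.0},
    {"stratum": "low",  "followup": 2.0},
    {"stratum": "high", "followup": 3.0},
]
rates_center_j = {"low": 0.05, "high": 0.20}
print(round(srr(rates_center_j, population, observed_total=2), 3))  # -> 0.45
```

An SRR below 1 suggests the total population would have experienced fewer deaths under center j's hazard than were actually observed; because the same standard population is used for every center, covariate imbalances cancel out.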
107. Methods for Estimating Center Effects on Recurrent Events.
- Author
-
Liu D, Kalbfleisch JD, and Schaubel DE
- Abstract
In this article, we develop methods for quantifying center effects with respect to recurrent event data. In the models of interest, center effects are assumed to act multiplicatively on the recurrent event rate function. When the number of centers is large, traditional estimation methods that treat centers as categorical variables have many parameters and are sometimes not feasible to implement, especially with large numbers of distinct recurrent event times. We propose a new estimation method for center effects which avoids including indicator variables for centers. We then show that center effects can be consistently estimated by the center-specific ratio of observed to expected cumulative numbers of events. We also consider the case where the recurrent event sequence can be stopped permanently by a terminating event. Large sample results are developed for the proposed estimators. We assess the finite-sample properties of the proposed estimators through simulation studies. The method is then applied to national hospital admissions data for end stage renal disease patients.
- Published
- 2014
- Full Text
- View/download PDF
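The key estimator in entry 107 has a very compact form: a center's multiplicative effect on the recurrent event rate is consistently estimated by the center-specific ratio of observed to expected cumulative event counts, with the expected counts coming from a model that contains no center terms. The toy numbers below are illustrative.

```python
def center_effect(observed_events, expected_events):
    """O/E ratio; expected counts come from a model with no center indicators."""
    return observed_events / expected_events

centers = {"A": (30, 24.0), "B": (18, 22.5), "C": (40, 40.0)}
effects = {c: round(center_effect(o, e), 3) for c, (o, e) in centers.items()}
print(effects)  # > 1 means more events than expected, < 1 fewer
```

This is what lets the method scale to many centers: no indicator variables are estimated, so the number of parameters does not grow with the number of centers.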
108. Matching methods for obtaining survival functions to estimate the effect of a time-dependent treatment.
- Author
-
Li Y, Schaubel DE, and He K
- Abstract
In observational studies of survival time featuring a binary time-dependent treatment, the hazard ratio (an instantaneous measure) is often used to represent the treatment effect. However, investigators are often more interested in the difference in survival functions. We propose semiparametric methods to estimate the causal effect of treatment among the treated with respect to survival probability. The objective is to compare post-treatment survival with the survival function that would have been observed in the absence of treatment. For each patient, we compute a prognostic score (based on the pre-treatment death hazard) and a propensity score (based on the treatment hazard). Each treated patient is then matched with an alive, uncensored and not-yet-treated patient with similar prognostic and/or propensity scores. The experience of each treated and matched patient is weighted using a variant of Inverse Probability of Censoring Weighting to account for the impact of censoring. We propose estimators of the treatment-specific survival functions (and their difference), computed through weighted Nelson-Aalen estimators. Closed-form variance estimators are proposed which take into consideration the potential replication of subjects across matched sets. The proposed methods are evaluated through simulation, then applied to estimate the effect of kidney transplantation on survival among end-stage renal disease patients using data from a national organ failure registry.
- Published
- 2014
- Full Text
- View/download PDF
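The survival-curve estimation step in entry 108 rests on a weighted Nelson-Aalen estimator: the cumulative hazard is a sum of weighted deaths over the weighted number at risk, and survival is recovered as exp(-H). In the paper the weights are IPCW-type and the data come from matched sets; the weights and times below are illustrative.

```python
import math

def weighted_nelson_aalen(data, t):
    """data: (event_time, died 0/1, weight) per subject; returns S(t)."""
    times = sorted({et for et, d, w in data if d and et <= t})
    H = 0.0
    for s in times:
        d_w = sum(w for et, d, w in data if d and et == s)        # weighted deaths
        at_risk_w = sum(w for et, d, w in data if et >= s)        # weighted at-risk
        H += d_w / at_risk_w
    return math.exp(-H)  # survival estimate at t

data = [(1.0, 1, 1.0), (2.0, 0, 1.0), (3.0, 1, 2.0), (4.0, 0, 1.0)]
print(round(weighted_nelson_aalen(data, 3.5), 4))
```

The treatment-effect contrast in the paper is the difference between two such curves (treated vs. matched controls), with variance estimators that account for subjects appearing in multiple matched sets.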
109. Disparities in liver transplantation: the association between donor quality and recipient race/ethnicity and sex.
- Author
-
Mathur AK, Schaubel DE, Zhang H, Guidinger MK, and Merion RM
- Subjects
- Adult, Black or African American statistics & numerical data, Asian statistics & numerical data, Cadaver, Female, Hispanic or Latino statistics & numerical data, Humans, Logistic Models, Male, Middle Aged, Proportional Hazards Models, Risk Factors, Sex Distribution, Tissue and Organ Procurement statistics & numerical data, United States epidemiology, White People statistics & numerical data, Ethnicity statistics & numerical data, Graft Survival, Healthcare Disparities ethnology, Healthcare Disparities statistics & numerical data, Liver Transplantation statistics & numerical data, Tissue Donors statistics & numerical data
- Abstract
Background: We aimed to examine the association between recipient race/ethnicity and sex, donor liver quality, and liver transplant graft survival., Methods: Adult non-status 1 liver recipients transplanted between March 1, 2002, and December 31, 2008, were identified using Scientific Registry of Transplant Recipients data. The factors of interest were recipient race/ethnicity and sex. Donor risk index (DRI) was used as a donor quality measure. Logistic regression was used to assess the association between race/ethnicity and sex in relation to the transplantation of low-quality (high DRI) or high-quality (low DRI) livers. Cox regression was used to assess the association between race/ethnicity and sex and liver graft failure risk, accounting for DRI., Results: Hispanics were 21% more likely to receive low-quality grafts compared to whites (odds ratio [OR]=1.21, P=0.002). Women had greater odds of receiving a low-quality graft compared to men (OR=1.24, P<0.0001). Despite adjustment for donor quality, African American recipients still had higher graft failure rates compared to whites (hazard ratio [HR]=1.28, P<0.001). Hispanics (HR=0.89, P=0.023) had significantly lower graft failure rates compared to whites despite higher odds of receiving a higher DRI graft. Using an interaction model of DRI and race/ethnicity, we found that the impact of DRI on graft failure rates was significantly reduced for African Americans compared to whites (P=0.02)., Conclusions: This study shows that while liver graft quality differed significantly by recipient race/ethnicity and sex, donor selection practices do not seem to be the dominant factor responsible for worse liver transplant outcomes for minority recipients.
- Published
- 2014
- Full Text
- View/download PDF
110. Comparison of methods for estimating the effect of salvage therapy in prostate cancer when treatment is given by indication.
- Author
-
Taylor JM, Shen J, Kennedy EH, Wang L, and Schaubel DE
- Subjects
- Computer Simulation, Humans, Male, Neoplasm Recurrence, Local, Prostatic Neoplasms drug therapy, Salvage Therapy standards, Treatment Outcome, Data Interpretation, Statistical, Models, Statistical, Prostate-Specific Antigen blood, Prostatic Neoplasms pathology, Randomized Controlled Trials as Topic methods, Salvage Therapy methods
- Abstract
For patients who were previously treated for prostate cancer, salvage hormone therapy is frequently given when the longitudinal marker prostate-specific antigen begins to rise during follow-up. Because the treatment is given by indication, estimating the effect of the hormone therapy is challenging. In a previous paper we described two methods for estimating the treatment effect, called two-stage and sequential stratification. The two-stage method involved modeling the longitudinal and survival data. The sequential stratification method involves contrasts within matched sets of people, where each matched set includes people who did and did not receive hormone therapy. In this paper, we evaluate the properties of these two methods and compare and contrast them with the marginal structural model methodology. The marginal structural model methodology involves a weighted survival analysis, where the weights are derived from models for the time of hormone therapy. We highlight the different conditional and marginal interpretations of the quantities being estimated by the three methods. Using simulations that mimic the prostate cancer setting, we evaluate bias, efficiency, and accuracy of estimated standard errors and robustness to modeling assumptions. The results show differences between the methods in terms of the quantities being estimated and in efficiency. We also demonstrate how the results of a randomized trial of salvage hormone therapy are strongly influenced by the design of the study and discuss how the findings from using the three methodologies can be used to infer the results of a trial., (Copyright © 2013 John Wiley & Sons, Ltd.)
- Published
- 2014
- Full Text
- View/download PDF
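The marginal structural model comparator in entry 110 rests on weights built from models for the time of hormone therapy: at each visit before treatment starts, a subject contributes the ratio of the marginal to the covariate-conditional probability of remaining untreated (a stabilized weight). The probabilities below are hypothetical model outputs, not fitted values.

```python
def stabilized_weight(p_untreated_marginal, p_untreated_given_psa):
    """Cumulative product of per-visit ratios up to the current visit."""
    w = 1.0
    for pm, pc in zip(p_untreated_marginal, p_untreated_given_psa):
        w *= pm / pc
    return w

# a subject whose rising PSA makes continued non-treatment less likely than
# average is up-weighted to stand in for similar subjects who were treated
pm = [0.95, 0.93, 0.90]   # marginal P(still untreated) at each visit
pc = [0.97, 0.90, 0.75]   # conditional on PSA history
print(round(stabilized_weight(pm, pc), 3))
```

The weighted survival analysis then estimates a marginal treatment effect, in contrast to the conditional contrasts produced by the two-stage and sequential stratification approaches.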
111. Patient-specific prediction of ESRD after liver transplantation.
- Author
-
Sharma P, Goodrich NP, Schaubel DE, Guidinger MK, and Merion RM
- Subjects
- Adult, Cadaver, Female, Humans, Incidence, Male, Medicaid statistics & numerical data, Medicare statistics & numerical data, Middle Aged, Predictive Value of Tests, Proportional Hazards Models, Risk Factors, United States epidemiology, Kidney Failure, Chronic mortality, Liver Transplantation adverse effects, Liver Transplantation mortality, Postoperative Complications mortality
- Abstract
Incident ESRD after liver transplantation (LT) is associated with high post-transplant mortality. We constructed and validated a continuous renal risk index (RRI) to predict post-LT ESRD. Data for 43,514 adult recipients of deceased donor LT alone (February 28, 2002 to December 31, 2010) were linked from the Scientific Registry of Transplant Recipients and the Centers for Medicare and Medicaid Services ESRD Program. An adjusted Cox regression model of time to post-LT ESRD was fitted, and the resulting equation was used to calculate an RRI for each LT recipient. The RRI included 14 recipient factors: age, African-American race, hepatitis C, cholestatic disease, body mass index ≥ 35, pre-LT diabetes, ln creatinine for recipients not on dialysis, ln albumin, ln bilirubin, serum sodium <134 mEq/L, status-1, previous LT, transjugular intrahepatic portosystemic shunt, and acute dialysis at LT. This RRI was validated and had a C statistic of 0.76 (95% confidence interval, 0.75 to 0.78). Higher RRI was significantly associated with higher 5-year cumulative incidence of ESRD and post-transplant mortality. In conclusion, the RRI constructed in this study quantifies the risk of post-LT ESRD and is applicable to all LT alone recipients. This new validated measure may serve as an important prognostic tool for ameliorating post-LT ESRD risk and improving survival by informing post-LT patient management strategies.
- Published
- 2013
- Full Text
- View/download PDF
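The RRI in entry 111 is a Cox-model linear predictor, so computing it for a patient is simple arithmetic once the coefficients are known. The abstract lists the 14 factors but not the published coefficients; the values below are NOT the published ones, just placeholders to show the mechanics (including the log transforms for creatinine, albumin, and bilirubin).

```python
import math

coef = {  # hypothetical placeholder coefficients, for illustration only
    "age_per_year": 0.02, "ln_creatinine": 0.60, "ln_albumin": -0.50,
    "ln_bilirubin": 0.10, "diabetes": 0.45, "sodium_lt_134": 0.25,
}

def renal_risk_index(p):
    """Linear predictor over a subset of the 14 RRI factors (illustrative)."""
    return (coef["age_per_year"] * p["age"]
            + coef["ln_creatinine"] * math.log(p["creatinine"])
            + coef["ln_albumin"] * math.log(p["albumin"])
            + coef["ln_bilirubin"] * math.log(p["bilirubin"])
            + coef["diabetes"] * p["diabetes"]
            + coef["sodium_lt_134"] * (1 if p["sodium"] < 134 else 0))

patient = {"age": 55, "creatinine": 1.8, "albumin": 3.0,
           "bilirubin": 2.5, "diabetes": 1, "sodium": 131}
print(round(renal_risk_index(patient), 3))
```

Higher index values correspond to higher predicted ESRD hazard; the paper validates the full 14-factor index with a C statistic of 0.76.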
112. Short-term pretransplant renal replacement therapy and renal nonrecovery after liver transplantation alone.
- Author
-
Sharma P, Goodrich NP, Zhang M, Guidinger MK, Schaubel DE, and Merion RM
- Subjects
- Acute Kidney Injury diagnosis, Acute Kidney Injury mortality, Acute Kidney Injury physiopathology, Centers for Medicare and Medicaid Services, U.S., Disease Progression, Female, Hepatorenal Syndrome diagnosis, Hepatorenal Syndrome mortality, Hepatorenal Syndrome physiopathology, Humans, Kidney Failure, Chronic mortality, Kidney Failure, Chronic physiopathology, Kidney Failure, Chronic therapy, Kidney Transplantation, Male, Middle Aged, Preoperative Care, Proportional Hazards Models, Recovery of Function, Registries, Retrospective Studies, Risk Factors, Time Factors, Tissue and Organ Procurement, Treatment Outcome, United States, Waiting Lists, Acute Kidney Injury therapy, Hepatorenal Syndrome therapy, Kidney physiopathology, Liver Transplantation adverse effects, Liver Transplantation mortality, Renal Replacement Therapy adverse effects, Renal Replacement Therapy mortality
- Abstract
Background and Objectives: Candidates with AKI, including hepatorenal syndrome, often recover renal function after successful liver transplantation (LT). This study examined the incidence and risk factors associated with renal nonrecovery within 6 months of LT alone among those receiving acute renal replacement therapy (RRT) before LT., Design, Setting, Participants, & Measurements: Scientific Registry of Transplant Recipients data were linked with Centers for Medicare and Medicaid Services ESRD data for 2112 adult deceased-donor LT-alone recipients who received acute RRT for ≤90 days before LT (February 28, 2002 to August 31, 2010). Primary outcome was renal nonrecovery (post-LT ESRD), defined as transition to chronic dialysis or waitlisting or receipt of kidney transplant within 6 months of LT. Cumulative incidence of renal nonrecovery was calculated using competing risk analysis. Cox regression identified recipient and donor predictors of renal nonrecovery., Results: The cumulative incidence of renal nonrecovery after LT alone among those receiving pre-LT acute RRT was 8.9%. Adjusted renal nonrecovery risk increased by 3.6% per day of pre-LT RRT (P<0.001). Age at LT per 5 years (P=0.02), previous LT (P=0.01), and pre-LT diabetes (P<0.001) were significant risk factors of renal nonrecovery. Twenty-one percent of recipients died within 6 months of LT. Duration of pretransplant RRT did not predict 6-month post-transplant mortality., Conclusions: Among recipients on acute RRT before LT who survived after LT alone, the majority recovered their renal function within 6 months of LT. Longer pre-LT RRT duration, advanced age, diabetes, and re-LT were significantly associated with increased risk of renal nonrecovery.
- Published
- 2013
- Full Text
- View/download PDF
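The "competing risk analysis" in entry 112 refers to a cumulative incidence function in which death without renal recovery competes with the outcome of interest, so deaths are not treated as censored. A toy Aalen-Johansen-style calculation (times and statuses are illustrative):

```python
def cumulative_incidence(data, cause, t_max):
    """data: (time, status) with status 1 = event of interest,
    2 = competing event (e.g., death), 0 = censored."""
    times = sorted({t for t, s in data if s in (1, 2) and t <= t_max})
    surv, cif = 1.0, 0.0   # overall event-free survival just before t, and CIF
    for t in times:
        n = sum(1 for tt, s in data if tt >= t)                  # at risk
        d_int = sum(1 for tt, s in data if tt == t and s == cause)
        d_all = sum(1 for tt, s in data if tt == t and s in (1, 2))
        cif += surv * d_int / n      # event can only occur if still event-free
        surv *= 1 - d_all / n        # both causes deplete the risk set
    return cif

data = [(1, 2), (2, 1), (3, 0), (4, 1), (5, 0), (6, 2)]
print(round(cumulative_incidence(data, cause=1, t_max=5), 4))
```

Treating the competing deaths as censored would inflate the estimate, which is why the paper reports cumulative incidence rather than one minus Kaplan-Meier.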
113. An estimating function approach to the analysis of recurrent and terminal events.
- Author
-
Kalbfleisch JD, Schaubel DE, Ye Y, and Gong Q
- Subjects
- Computer Simulation, Hospitalization statistics & numerical data, Humans, Kidney Failure, Chronic mortality, Kidney Failure, Chronic therapy, Proportional Hazards Models, Recurrence, Renal Dialysis statistics & numerical data, Biometry methods, Models, Statistical
- Abstract
In clinical and observational studies, the event of interest can often recur on the same subject. In a more complicated situation, there exists a terminal event (e.g., death) which stops the recurrent event process. In many such instances, the terminal event is strongly correlated with the recurrent event process. We consider the recurrent/terminal event setting and model the dependence through a shared gamma frailty that is included in both the recurrent event rate and terminal event hazard functions. Conditional on the frailty, a model is specified only for the marginal recurrent event process, hence avoiding the strong Poisson-type assumptions traditionally used. Analysis is based on estimating functions that allow for estimation of covariate effects on the recurrent event rate and terminal event hazard. The method also permits estimation of the degree of association between the two processes. Closed-form asymptotic variance estimators are proposed. The proposed method is evaluated through simulations to assess the applicability of the asymptotic results in finite samples and the sensitivity of the method to its underlying assumptions. The methods can be extended in straightforward ways to accommodate multiple types of recurrent and terminal events. Finally, the methods are illustrated in an analysis of hospitalization data for patients in an international multi-center study of outcomes among dialysis patients., (© 2013, The International Biometric Society.)
- Published
- 2013
- Full Text
- View/download PDF
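The dependence structure in entry 113 can be made concrete with a toy simulation: a shared gamma frailty multiplies both the recurrent event (hospitalization) rate and the terminal event (death) hazard, so frail subjects accumulate more events and die sooner. All parameter values below are illustrative, and this sketch does not implement the paper's estimating functions.

```python
import random

random.seed(42)
THETA = 1.0               # frailty variance (mean fixed at 1)
RATE, DEATH, TAU = 2.0, 0.5, 5.0

def one_subject():
    nu = random.gammavariate(1.0 / THETA, THETA)   # shared gamma frailty
    death = random.expovariate(nu * DEATH)         # terminal event time
    stop = min(death, TAU)                         # events stop at death
    count, clock = 0, random.expovariate(nu * RATE)
    while clock < stop:                            # recurrent events as gaps
        count += 1
        clock += random.expovariate(nu * RATE)
    return count, death

sample = [one_subject() for _ in range(2000)]
counts = [c for c, _ in sample]
deaths = [d for _, d in sample]
mc, md = sum(counts) / 2000, sum(deaths) / 2000
cov = sum((c - mc) * (d - md) for c, d in sample) / 2000
print(cov < 0)  # frail subjects: more events and earlier death
```

The negative covariance between event counts and death times is exactly the dependence that would bias a naive marginal analysis, and which the shared-frailty formulation models explicitly.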
114. Partly conditional estimation of the effect of a time-dependent factor in the presence of dependent censoring.
- Author
-
Gong Q and Schaubel DE
- Subjects
- Algorithms, End Stage Liver Disease mortality, Humans, Longitudinal Studies, Models, Statistical, Proportional Hazards Models, Time Factors, Biometry methods, Survival Analysis
- Abstract
We propose semiparametric methods for estimating the effect of a time-dependent covariate on treatment-free survival. The data structure of interest consists of a longitudinal sequence of measurements and a potentially censored survival time. The factor of interest is time-dependent. Treatment-free survival is of interest and is dependently censored by the receipt of treatment. Patients may be removed from consideration for treatment, temporarily or permanently. The proposed methods combine landmark analysis and partly conditional hazard regression. A set of calendar time cross-sections is specified, and survival time (from cross-section date) is modeled through weighted Cox regression. The assumed model for death is marginal in the sense that time-varying covariates are taken as fixed at each landmark, with the mortality hazard function implicitly averaging across future covariate trajectories. Dependent censoring is overcome by a variant of inverse probability of censoring weighting (IPCW). The proposed estimators are shown to be consistent and asymptotically normal, with consistent covariance estimators provided. Simulation studies reveal that the proposed estimation procedures are appropriate for practical use. We apply the proposed methods to pre-transplant mortality among end-stage liver disease (ESLD) patients., (© 2013, The International Biometric Society.)
- Published
- 2013
- Full Text
- View/download PDF
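The landmark construction in entry 114 can be sketched directly: at each calendar cross-section, keep subjects still at risk, freeze the time-varying covariate (e.g., a MELD-like score) at its current value, and measure survival from the landmark date. Field names and covariate trajectories below are illustrative.

```python
def landmark_rows(subjects, landmarks):
    """Build one analysis row per (subject, landmark) pair."""
    rows = []
    for lm in landmarks:
        for s in subjects:
            if s["entry"] <= lm < s["exit"]:          # at risk at the landmark
                rows.append({
                    "id": s["id"],
                    "landmark": lm,
                    "meld_at_lm": s["meld"](lm),      # covariate frozen at lm
                    "time_from_lm": s["exit"] - lm,   # residual follow-up
                    "died": s["died"],
                })
    return rows

subjects = [
    {"id": 1, "entry": 0, "exit": 10, "died": 1, "meld": lambda t: 15 + t},
    {"id": 2, "entry": 3, "exit": 8,  "died": 0, "meld": lambda t: 20},
]
rows = landmark_rows(subjects, landmarks=[2, 5])
print([(r["id"], r["landmark"], r["meld_at_lm"]) for r in rows])
```

The paper then fits weighted Cox models to these stacked rows, with IPCW weights handling the dependent censoring by treatment; the stacking itself is the "partly conditional" part.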
115. Contrasting treatment-specific survival using double-robust estimators.
- Author
-
Zhang M and Schaubel DE
- Subjects
- Humans, Kaplan-Meier Estimate, Kidney Transplantation mortality, Kidney Transplantation statistics & numerical data, Multicenter Studies as Topic, Netherlands epidemiology, Observation, Outcome Assessment, Health Care statistics & numerical data, Proportional Hazards Models, Statistics, Nonparametric, Outcome Assessment, Health Care methods, Survival Analysis
- Abstract
In settings where a randomized trial is infeasible, observational data are frequently used to compare treatment-specific survival. The average causal effect (ACE) can be used to make inference regarding treatment policies on patient populations, and a valid ACE estimator must account for imbalances with respect to treatment-specific covariate distributions. One method through which the ACE on survival can be estimated involves appropriately averaging over Cox-regression-based fitted survival functions. A second available method balances the treatment-specific covariate distributions through inverse probability of treatment weighting and then contrasts weighted nonparametric survival function estimators. Because both methods have their advantages and disadvantages, we propose methods that essentially combine both estimators. The proposed methods are double robust, in the sense that they are consistent if at least one of the two working regression models (i.e., logistic model for treatment and Cox model for death hazard) is correct. The proposed methods involve estimating the ACE with respect to restricted mean survival time, defined as the area under the survival curve up to some prespecified time point. We derive and evaluate asymptotic results through simulation. We apply the proposed methods to estimate the ACE of donation-after-cardiac-death kidney transplantation with the use of data obtained from multiple centers in the Netherlands., (Copyright © 2012 John Wiley & Sons, Ltd.)
- Published
- 2012
- Full Text
- View/download PDF
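The estimand in entry 115 is the average causal effect on restricted mean survival time, i.e., the area under the survival curve up to a prespecified horizon tau. Computing that area from an estimated step survival curve is a simple integration; the curve values below are made up.

```python
def rmst(times, surv_probs, tau):
    """times: event times in increasing order; surv_probs: S(t) just after
    each time. The step curve starts at S = 1 at t = 0."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in zip(times, surv_probs):
        if t >= tau:
            break
        area += prev_s * (t - prev_t)   # rectangle up to the next drop
        prev_t, prev_s = t, s
    area += prev_s * (tau - prev_t)     # final rectangle out to tau
    return area

# step curve: S = 1 on [0,1), 0.8 on [1,3), 0.5 on [3,5); tau = 5
print(rmst([1.0, 3.0], [0.8, 0.5], tau=5.0))  # 1 + 0.8*2 + 0.5*2 = 3.6
```

The double-robust estimators in the paper contrast this quantity between treatment groups, averaging Cox-based curves and IPTW-weighted nonparametric curves so that either working model being correct suffices.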
116. Choice of reference in the evaluation of the day-of-week effect on mortality on hemodialysis.
- Author
-
Zhang H, Schaubel DE, Kalbfleisch JD, Robinson BM, Pisoni RL, Port FK, and Saran R
- Subjects
- Female, Humans, Male, Kidney Diseases mortality, Kidney Diseases therapy, Outcome and Process Assessment, Health Care, Practice Patterns, Physicians' statistics & numerical data, Renal Dialysis mortality
- Published
- 2012
- Full Text
- View/download PDF
117. Factors that affect deceased donor liver transplantation rates in the United States in addition to the Model for End-stage Liver Disease score.
- Author
-
Sharma P, Schaubel DE, Messersmith EE, Guidinger MK, and Merion RM
- Subjects
- Blood Grouping and Crossmatching, End Stage Liver Disease diagnosis, End Stage Liver Disease mortality, Female, Histocompatibility, Humans, Male, Middle Aged, Proportional Hazards Models, Registries, Residence Characteristics, Retrospective Studies, Severity of Illness Index, Sex Factors, Time Factors, Tissue and Organ Procurement, United States, Decision Support Techniques, End Stage Liver Disease surgery, Health Status Indicators, Healthcare Disparities, Liver Transplantation adverse effects, Liver Transplantation immunology, Tissue Donors supply & distribution, Waiting Lists mortality
- Abstract
Under an ideal implementation of Model for End-Stage Liver Disease (MELD)-based liver allocation, the only factors that would predict deceased donor liver transplantation (DDLT) rates would be the MELD score, blood type, and donation service area (DSA). We aimed to determine whether additional factors are associated with DDLT rates in actual practice. Data from the Scientific Registry of Transplant Recipients for all adult candidates wait-listed between March 1, 2002 and December 31, 2008 (n = 57,503) were analyzed. Status 1 candidates were excluded. Cox regression was used to model covariate-adjusted DDLT rates, which were stratified by the DSA, blood type, liver-intestine policy, and allocation MELD score. Inactive time on the wait list was not modeled, so the computed DDLT hazard ratios (HRs) apply to actively listed wait-list candidates. Many factors, including the candidate's age, sex, diagnosis, hospitalization status, and height, prior DDLT, and combined listing for liver-kidney or liver-intestine transplantation, were significantly associated with DDLT rates. Factors associated with significantly lower covariate-adjusted DDLT rates were a higher serum creatinine level (HR = 0.92, P < 0.001), a higher bilirubin level (HR = 0.99, P = 0.001), and the receipt of dialysis (HR = 0.83, P < 0.001). Mild ascites (HR = 1.15, P < 0.001) and hepatic encephalopathy (grade 1 or 2, HR = 1.05, P = 0.02; grade 3 or 4, HR = 1.10, P = 0.01) were associated with significantly higher adjusted DDLT rates. In conclusion, adjusted DDLT rates for actively listed candidates are affected by many factors aside from those integral to the allocation system; these factors include the components of the MELD score itself as well as candidate factors that were considered but were deliberately omitted from the MELD score in order to keep it objective. These results raise the question of whether additional candidate characteristics should be explicitly incorporated into the prioritization of wait-list candidates because such factors are already systematically affecting DDLT rates under the current allocation system., (Copyright © 2012 American Association for the Study of Liver Diseases.)
- Published
- 2012
- Full Text
- View/download PDF
118. Double-robust semiparametric estimator for differences in restricted mean lifetimes in observational studies.
- Author
-
Zhang M and Schaubel DE
- Subjects
- Biometry methods, Data Interpretation, Statistical, Humans, Incidence, Risk Assessment, United States epidemiology, Epidemiologic Methods, Kidney Transplantation mortality, Models, Statistical, Pancreas Transplantation mortality, Proportional Hazards Models, Survival Analysis, Survival Rate
- Abstract
Restricted mean lifetime is often of direct interest in epidemiologic studies involving censored survival times. Differences in this quantity can be used as a basis for comparing several groups. For example, transplant surgeons, nephrologists, and of course patients are interested in comparing posttransplant lifetimes among various types of kidney transplants to assist in clinical decision making. As the factor of interest is not randomized, covariate adjustment is needed to account for imbalances in confounding factors. In this report, we use semiparametric theory to develop an estimator for differences in restricted mean lifetimes while accounting for confounding factors. The proposed method involves building working models for the time-to-event and coarsening mechanism (i.e., group assignment and censoring). We show that the proposed estimator possesses the double robust property; i.e., when either the time-to-event or coarsening process is modeled correctly, the estimator is consistent and asymptotically normal. Simulation studies are conducted to assess its finite-sample performance and the method is applied to national kidney transplant data., (© 2012, The International Biometric Society.)
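The estimand above can be made concrete with a small sketch. This is not the paper's double-robust semiparametric estimator; it is a minimal illustration, with hypothetical lifetimes and assuming complete (uncensored) data, of the restricted mean lifetime E[min(T, tau)] and the group difference the paper targets.

```python
# Illustrative sketch only (hypothetical data, no censoring, no
# confounding adjustment): the restricted mean lifetime is the average
# of lifetimes truncated at a horizon tau, and the target of inference
# is the difference in this quantity between two groups.

def restricted_mean(times, tau):
    """Mean of min(T, tau) over the observed lifetimes `times`."""
    return sum(min(t, tau) for t in times) / len(times)

def rmst_difference(group1, group0, tau):
    """Difference in restricted mean lifetime: group1 minus group0."""
    return restricted_mean(group1, tau) - restricted_mean(group0, tau)

# Hypothetical posttransplant lifetimes (years), horizon tau = 5 years
treated = [1.2, 4.0, 6.5, 8.0, 3.3]
control = [0.8, 2.1, 5.5, 1.9, 2.7]
diff = rmst_difference(treated, control, tau=5.0)
```

With censoring and nonrandomized group assignment, the paper replaces these raw averages with estimates built from working models for the event time and the coarsening mechanism.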
- Published
- 2012
119. Analytic morphomics, core muscle size, and surgical outcomes.
- Author
-
Englesbe MJ, Lee JS, He K, Fan L, Schaubel DE, Sheetz KH, Harbaugh CM, Holcombe SA, Campbell DA Jr, Sonnenday CJ, and Wang SC
- Subjects
- Adult, Aged, Female, Humans, Kaplan-Meier Estimate, Logistic Models, Male, Middle Aged, Psoas Muscles, Risk Assessment, Risk Factors, Tomography, X-Ray Computed, Vascular Surgical Procedures mortality, Muscle, Skeletal anatomy & histology, Surgical Procedures, Operative mortality
- Abstract
Objective: Assess the relationship between lean core muscle size, measured on preoperative cross-sectional images, and surgical outcomes., Background: Novel measures of preoperative risk are needed. Analytic morphomic analysis of cross-sectional diagnostic images may elucidate vast amounts of patient-specific data, which are never assessed by clinicians., Methods: The study population included all patients within the Michigan Surgical Quality Collaborative database with a computerized tomography (CT) scan before major, elective general or vascular surgery (N = 1453). The lean core muscle size was calculated using analytic morphomic techniques. The primary outcome measure was survival, whereas secondary outcomes included surgical complications and costs. Covariate-adjusted outcomes were assessed using Kaplan-Meier analysis, multivariate Cox regression, multivariate logistic regression, and generalized estimating equation methods., Results: The mean follow-up was 2.3 years and 214 patients died during the observation period. The covariate-adjusted hazard ratio for lean core muscle area was 1.45 (P = 0.028), indicating that mortality increased by 45% per 1000 mm(2) decrease in lean core muscle area. When stratified into tertiles of core muscle size, the 1-year survival was 87% versus 95% for the smallest versus largest tertile, whereas the 3-year survival was 75% versus 91%, respectively (P < 0.003 for both comparisons). The estimated average risk of complications significantly differed and was 20.9%, 15.0%, and 12.3% in the lower, middle, and upper tertiles of lean core muscle area, respectively. Covariate-adjusted cost increased significantly by an estimated $10,110 per 1000 mm(2) decrease in core muscle size (P = 0.003)., Conclusions: Core muscle size is an independent and potentially important preoperative risk factor. The techniques used to assess preoperative CT scans, namely analytic morphomics, may represent a novel approach to better understanding patient risk.
- Published
- 2012
120. Computationally efficient marginal models for clustered recurrent event data.
- Author
-
Liu D, Schaubel DE, and Kalbfleisch JD
- Subjects
- Cluster Analysis, Computer Simulation, Data Interpretation, Statistical, Databases, Factual statistics & numerical data, Hospitalization statistics & numerical data, Humans, Kidney Failure, Chronic therapy, Recurrence, Biometry methods, Models, Statistical
- Abstract
Large observational databases derived from disease registries and retrospective cohort studies have proven very useful for the study of health services utilization. However, the use of large databases may introduce computational difficulties, particularly when the event of interest is recurrent. In such settings, grouping the recurrent event data into prespecified intervals leads to a flexible event rate model and a data reduction that remedies the computational issues. We propose a possibly stratified marginal proportional rates model with a piecewise-constant baseline event rate for recurrent event data. Both the absence and the presence of a terminal event are considered. Large-sample distributions are derived for the proposed estimators. Simulation studies are conducted under various data configurations, including settings in which the model is misspecified. Guidelines for interval selection are provided and assessed using numerical studies. We then show that the proposed procedures can be carried out using standard statistical software (e.g., SAS, R). An application based on national hospitalization data for end-stage renal disease patients is provided., (© 2011, The International Biometric Society.)
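The data-reduction idea can be sketched as follows, under simplifying assumptions (a single stratum, no terminal event, and all follow-up starting at time zero, with hypothetical data): recurrent event times are grouped into prespecified intervals, and the rate in each interval is estimated as events divided by person-time at risk, giving a piecewise-constant rate function.

```python
# Minimal sketch of a piecewise-constant event rate (not the paper's
# full marginal proportional rates estimator): group recurrent events
# into intervals and divide by person-time at risk in each interval.

def piecewise_rates(event_times, follow_up, cutpoints):
    """Events per unit person-time within [c0,c1), [c1,c2), ...

    event_times : list of lists, recurrent event times per subject
    follow_up   : end-of-follow-up (censoring) time per subject
    cutpoints   : interval boundaries, e.g. [0, 1, 2, 3]
    """
    rates = []
    for lo, hi in zip(cutpoints, cutpoints[1:]):
        # events observed in [lo, hi) across all subjects
        events = sum(sum(lo <= t < hi for t in times) for times in event_times)
        # person-time contributed to [lo, hi) by each subject
        at_risk = sum(max(0.0, min(c, hi) - lo) for c in follow_up)
        rates.append(events / at_risk if at_risk > 0 else float("nan"))
    return rates

# Two hypothetical patients: hospitalization times and follow-up lengths
events = [[0.5, 1.5, 1.8], [0.2, 2.5]]
futime = [2.0, 3.0]
rates = piecewise_rates(events, futime, [0.0, 1.0, 2.0, 3.0])
```

The grouped counts and person-time offsets are exactly the quantities a standard Poisson-style fit in SAS or R would consume, which is why the reduction eases the computational burden.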
- Published
- 2012
121. Dialysis outcomes and analysis of practice patterns suggests the dialysis schedule affects day-of-week mortality.
- Author
-
Zhang H, Schaubel DE, Kalbfleisch JD, Bragg-Gresham JL, Robinson BM, Pisoni RL, Canaud B, Jadoul M, Akiba T, Saito A, Port FK, and Saran R
- Subjects
- Aged, Cause of Death, Europe epidemiology, Female, Humans, Japan epidemiology, Male, Middle Aged, Proportional Hazards Models, Renal Dialysis adverse effects, Risk Assessment, Risk Factors, Survival Analysis, Time Factors, Treatment Outcome, United States epidemiology, Kidney Diseases mortality, Kidney Diseases therapy, Outcome and Process Assessment, Health Care, Practice Patterns, Physicians' statistics & numerical data, Renal Dialysis mortality
- Abstract
The risk of death for hemodialysis patients is thought to be highest on the days following the longest interval without dialysis (usually Mondays and Tuesdays); however, existing results are inconclusive. To clarify this we analyzed Dialysis Outcomes and Practice Patterns Study (DOPPS) data of 22,163 hemodialysis patients from the United States, Europe, and Japan. Our study focused on the association between dialysis schedule and day of the week of all-cause, cardiovascular, and noncardiovascular mortality with day-of-week coded as a time-dependent covariate. The models were adjusted for dialysis schedule, age, country, DOPPS phase I or II, and other demographic and clinical covariates, and compared mortality on each day to the 7-day average. Patients on a Monday-Wednesday-Friday (MWF) schedule had elevated all-cause mortality on Mondays, and those on a Tuesday-Thursday-Saturday (TTS) schedule had increased risk of mortality on Tuesdays in all three regions. The association between day-of-week mortality and schedule was generally stronger for cardiovascular than noncardiovascular mortality, and was most pronounced in the United States. Unexpectedly, Japanese patients on a MWF schedule had a higher risk of noncardiovascular mortality on Fridays, and European patients on a TTS schedule experienced an elevated cardiovascular mortality on Saturdays. Thus, future studies are needed to evaluate the influence of practice patterns on schedule-specific mortality and factors that could modulate this effect.
- Published
- 2012
122. End-stage liver disease candidates at the highest model for end-stage liver disease scores have higher wait-list mortality than status-1A candidates.
- Author
-
Sharma P, Schaubel DE, Gong Q, Guidinger M, and Merion RM
- Subjects
- Acetaminophen poisoning, Adult, Aged, Analgesics, Non-Narcotic poisoning, Cohort Studies, Female, Humans, Kaplan-Meier Estimate, Liver Failure, Acute chemically induced, Male, Middle Aged, Predictive Value of Tests, Registries statistics & numerical data, Risk Factors, Young Adult, Liver Failure, Acute mortality, Liver Failure, Acute surgery, Liver Transplantation statistics & numerical data, Severity of Illness Index, Waiting Lists mortality
- Abstract
Unlabelled: Candidates with fulminant hepatic failure (Status-1A) receive the highest priority for liver transplantation (LT) in the United States. However, no studies have compared wait-list mortality risk among end-stage liver disease (ESLD) candidates with high Model for End-Stage Liver Disease (MELD) scores to those listed as Status-1A. We aimed to determine if there are MELD scores for ESLD candidates at which their wait-list mortality risk is higher than that of Status-1A, and to identify the factors predicting wait-list mortality among those who are Status-1A. Data were obtained from the Scientific Registry of Transplant Recipients for adult LT candidates (n = 52,459) listed between September 1, 2001, and December 31, 2007. Candidates listed for repeat LT as Status-1A were excluded. Starting from the date of wait listing, candidates were followed for 14 days or until the earliest occurrence of death, transplant, or granting of an exception MELD score. ESLD candidates were categorized by MELD score, with a separate category for those with calculated MELD > 40. We compared wait-list mortality between each MELD category and Status-1A (reference) using time-dependent Cox regression. ESLD candidates with MELD > 40 had almost twice the wait-list mortality risk of Status-1A candidates, with a covariate-adjusted hazard ratio of HR = 1.96 (P = 0.004). There was no difference in wait-list mortality risk for candidates with MELD 36-40 and Status-1A, whereas candidates with MELD < 36 had significantly lower mortality risk than Status-1A candidates. MELD score did not significantly predict wait-list mortality among Status-1A candidates (P = 0.18). Among Status-1A candidates with acetaminophen toxicity, MELD was a significant predictor of wait-list mortality (P < 0.0009). 
Posttransplant survival was similar for Status-1A and ESLD candidates with MELD > 20 (P = 0.6)., Conclusion: Candidates with MELD > 40 have significantly higher wait-list mortality than, and posttransplant survival similar to, candidates who are Status-1A, and therefore should be assigned higher priority than Status-1A for allocation. Because ESLD candidates with MELD 36-40 and Status-1A have similar wait-list mortality risk and posttransplant survival, these candidates should be assigned similar rather than sequential priority for deceased donor LT., (Copyright © 2011 American Association for the Study of Liver Diseases.)
- Published
- 2012
123. Semiparametric Transformation Rate Model for Recurrent Event Data.
- Author
-
Zeng D, Schaubel DE, and Cai J
- Abstract
In this article, we propose a class of semiparametric transformation rate models for recurrent event data subject to right-censoring and potentially stopped by a terminating event (e.g., death). These transformation models include both the additive rates model and the proportional rates model as special cases. Respecting the property that no recurrent events can occur after the terminating event, we model the conditional recurrent event rate given survival. Weighted estimating equations are constructed to estimate the regression coefficients and baseline rate function. In particular, the baseline rate function is approximated by a wavelet function. Asymptotic properties of the proposed estimators are derived and a data-dependent criterion is proposed for selecting the most suitable transformation. Simulation studies show that the proposed estimators perform well for practical sample sizes. The proposed methods are used in two real-data examples: a randomized trial of rhDNase and a community trial of Vitamin A.
- Published
- 2011
124. Impact of MELD-based allocation on end-stage renal disease after liver transplantation.
- Author
-
Sharma P, Schaubel DE, Guidinger MK, Goodrich NP, Ojo AO, and Merion RM
- Subjects
- Adult, Aged, End Stage Liver Disease classification, Female, Health Care Rationing, Humans, Liver Transplantation mortality, Male, Middle Aged, Patient Selection, Proportional Hazards Models, Risk Factors, United States epidemiology, End Stage Liver Disease surgery, Kidney Failure, Chronic etiology, Liver Transplantation adverse effects
- Abstract
The proportion of patients undergoing liver transplantation (LT) with concomitant renal dysfunction markedly increased after allocation by the model for end-stage liver disease (MELD) score was introduced. We examined the incidence of subsequent post-LT end-stage renal disease (ESRD) before and after the policy was implemented. Data on all adult deceased donor LT recipients between April 27, 1995 and December 31, 2008 (n = 59,242), from the Scientific Registry of Transplant Recipients, were linked with Centers for Medicare & Medicaid Services' ESRD data. Cox regression was used to (i) compare pre-MELD and MELD eras with respect to post-LT ESRD incidence, (ii) determine the risk factors for post-LT ESRD and (iii) quantify the association between ESRD incidence and mortality. Crude rates of post-LT ESRD were 12.8 and 14.5 per 1000 patient-years in the pre-MELD and MELD eras, respectively. Covariate-adjusted post-LT ESRD risk was higher in the MELD era (hazard ratio [HR] = 1.15; p = 0.0049). African American race, hepatitis C, pre-LT diabetes, higher creatinine, lower albumin, lower bilirubin and sodium >141 mmol/L at LT were also significant predictors of post-LT ESRD. Post-LT ESRD was associated with higher post-LT mortality (HR = 3.32; p < 0.0001). The risk of post-LT ESRD, a strong predictor of post-LT mortality, is 15% higher in the MELD era. This study identified potentially modifiable risk factors of post-LT ESRD. Early intervention and modification of these risk factors may reduce the burden of post-LT ESRD., (©2011 The Authors Journal compilation © 2011 The American Society of Transplantation and the American Society of Transplant Surgeons.)
- Published
- 2011
125. Evidence-based development of liver allocation: a review.
- Author
-
Merion RM, Sharma P, Mathur AK, and Schaubel DE
- Subjects
- Child, Evidence-Based Medicine, Humans, Liver pathology, Organ Preservation, Registries, Time Factors, Treatment Outcome, Waiting Lists, End Stage Liver Disease therapy, Liver Transplantation methods, Tissue and Organ Procurement methods
- Abstract
Liver transplantation has undergone a rapid evolution from a high-risk experimental procedure to a mainstream therapy for thousands of patients with a wide range of hepatic diseases. Its increasing success has been accompanied by progressive imbalance between organ donor supply and the patients who might benefit. Where demand outstrips supply in transplantation, a system of organ allocation is inevitably required to make the wisest use of the available, but scarce, organs. Early attempts to rationally allocate donor livers were particularly hampered by lack of available and suitable data, leading to imperfect solutions that created or exacerbated inequities in the system. The advent and maturation of evidence-based predictors of waiting list mortality risk led to more objective criteria for liver allocation, aided by the increasing availability of data on large numbers of patients. Until now, the vast majority of allocation systems for liver transplantation have relied on estimation of waiting list mortality. Evidence-based allocation systems that incorporate measures of post-transplant outcomes are conceptually attractive and these transplant benefit-based allocation systems have been developed, modeled, and subjected to computer simulation. Future implementations of benefit-based liver allocation await continued refinement and additional debate in the transplant community., (© 2011 The Authors. Transplant International © 2011 European Society for Organ Transplantation.)
- Published
- 2011
126. Estimating differences in restricted mean lifetime using observational data subject to dependent censoring.
- Author
-
Zhang M and Schaubel DE
- Subjects
- Epidemiologic Studies, Humans, Liver Transplantation, Waiting Lists, Data Interpretation, Statistical, Models, Statistical, Survival Analysis
- Abstract
In epidemiologic studies of time to an event, mean lifetime is often of direct interest. We propose methods to estimate group- (e.g., treatment-) specific differences in restricted mean lifetime for studies where treatment is not randomized and lifetimes are subject to both dependent and independent censoring. The proposed methods may be viewed as a hybrid of two general approaches to accounting for confounders. Specifically, treatment-specific proportional hazards models are employed to account for baseline covariates, while inverse probability of censoring weighting is used to accommodate time-dependent predictors of censoring. The average causal effect is then obtained by averaging over differences in fitted values based on the proportional hazards models. Large-sample properties of the proposed estimators are derived and simulation studies are conducted to assess their finite-sample applicability. We apply the proposed methods to liver wait list mortality data from the Scientific Registry of Transplant Recipients., (© 2010, The International Biometric Society.)
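The censoring-weight ingredient can be sketched as follows. This is an illustration only, with hypothetical data, ignoring the covariates and time-dependent censoring predictors the paper accommodates: estimate the censoring survival curve G(t) by Kaplan-Meier, treating censoring as the event of interest, and then weight a death observed at time t by 1/G(t-).

```python
# Minimal inverse-probability-of-censoring-weighting (IPCW) sketch:
# a Kaplan-Meier estimate of the *censoring* distribution supplies
# the weights 1/G(t-) for observed deaths.

def censoring_survival(times, is_death):
    """Kaplan-Meier curve for the censoring distribution: [(t, G(t)), ...]."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, G, curve = len(times), 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        j, censored = i, 0
        while j < len(order) and times[order[j]] == t:
            censored += 0 if is_death[order[j]] else 1  # censorings are "events"
            j += 1
        G *= 1.0 - censored / at_risk
        curve.append((t, G))
        at_risk -= j - i
        i = j
    return curve

def ipcw_weight(t, curve):
    """1 / G(t-): inverse probability of remaining uncensored just before t."""
    G = 1.0
    for s, Gs in curve:
        if s < t:
            G = Gs
    return 1.0 / G

# Four hypothetical subjects: deaths at t = 1 and 3, censorings at t = 2 and 4
curve = censoring_survival([1.0, 2.0, 3.0, 4.0], [True, False, True, False])
w = ipcw_weight(3.0, curve)  # death at t = 3 upweighted by 1/G(3-)
```

In the paper these weights are combined with treatment-specific proportional hazards fits, and the causal contrast is obtained by averaging differences in fitted restricted mean lifetimes.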
- Published
- 2011
127. Sex-based disparities in liver transplant rates in the United States.
- Author
-
Mathur AK, Schaubel DE, Gong Q, Guidinger MK, and Merion RM
- Subjects
- Adult, End Stage Liver Disease diagnosis, Female, Humans, Male, Middle Aged, United States epidemiology, Waiting Lists, Liver Transplantation statistics & numerical data
- Abstract
We sought to characterize sex-based differences in access to deceased donor liver transplantation. Scientific Registry of Transplant Recipients data were used to analyze n = 78,998 adult candidates listed before (8/1997-2/2002) or after (2/2002-2/2007) implementation of Model for End-Stage Liver Disease (MELD)-based liver allocation. The primary outcome was deceased donor liver transplantation. Cox regression was used to estimate covariate-adjusted differences in transplant rates by sex. Females represented 38% of listed patients in the pre-MELD era and 35% in the MELD era. Females had significantly lower covariate-adjusted transplant rates in the pre-MELD era (by 9%; p < 0.0001) and in the MELD era (by 14%; p < 0.0001). In the MELD era, the disparity in transplant rate for females increased as waiting list mortality risk increased, particularly for MELD scores ≥15. Substantial geographic variation in sex-based differences in transplant rates was observed. Some areas of the United States had more than a 30% lower covariate-adjusted transplant rate for females compared to males in the MELD era. In conclusion, the disparity in liver transplant rates between females and males has increased in the MELD era. It is especially troubling that the disparity is magnified among patients with high MELD scores and in certain regions of the United States., (©2011 The Authors Journal compilation©2011 The American Society of Transplantation and the American Society of Transplant Surgeons.)
- Published
- 2011
128. Intentional versus unintentional contact as a mechanism of injury in youth ice hockey.
- Author
-
Darling SR, Schaubel DE, Baker JG, Leddy JJ, Bisson LJ, and Willer B
- Subjects
- Adolescent, Athletic Injuries epidemiology, Athletic Injuries etiology, Brain Concussion etiology, Child, Child, Preschool, Fractures, Bone etiology, Hockey statistics & numerical data, Humans, Incidence, Joint Dislocations etiology, Male, Ontario epidemiology, Prospective Studies, Sprains and Strains etiology, Hockey injuries
- Abstract
Background: Youth ice hockey injury rates and mechanisms have been described by various classification systems. Intentional versus unintentional contact was used to classify mechanisms of injuries. All injuries (n=247) in one youth hockey programme over a 5-year period were recorded and included in the analysis., Purpose: To evaluate youth ice hockey injuries and compare programmes that allow body checking versus programmes that do not allow body checking. A primary goal was to determine whether programmes that allow body checking have increased injury rates from intentional body contact. Another goal was to describe the rates of injury across ages, levels of competitive play and during games versus practices., Methods: Rates of injury were compared for three levels of competition (house, select and representative) for approximately 3000 boys aged 4-18 years over a 5-year period. This represents 13,292 player-years. Data were collected prospectively in this cohort study. All injuries were reported prospectively by a designated team official and verified by a physician. The log injury rate (per 1000 player hours) was modelled via Poisson regression with log player hours used as an offset. Rate ratios were used to compare covariate-adjusted injury rates for each of three groups (all injuries, intentional injuries, unintentional injuries)., Results: Unintentional contacts accounted for 66.0% of overall injuries (95% CI 60.0 to 72.0), compared with 34.0% from intentional contacts (p<0.001; Z=5.25). Serious injuries (fractures, dislocations, concussions) resulted more often from unintentional collisions (p=0.04). Players in more competitive leagues that allow body checking had a greater incidence of total injuries than players in less competitive leagues., Conclusions: Most injuries in the youth hockey programme studied were the result of unintentional contact, and were generally more severe. These findings were not expected given previously published research.
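The modelling device described above, log player-hours entering the Poisson model as an offset, can be sketched with hypothetical numbers (not the study's data); for a single two-group comparison, the maximum-likelihood rate ratio has the closed form below.

```python
# Illustrative sketch of rates with a person-time offset: in a Poisson
# model with offset log(hours), the fitted group coefficient is the log
# of the ratio of raw rates (events / person-time). Numbers are
# hypothetical, not the study's data.
import math

def rate_per_1000(events, player_hours):
    """Injury rate per 1000 player-hours."""
    return 1000.0 * events / player_hours

def rate_ratio(events1, hours1, events0, hours0):
    """MLE of the rate ratio in a two-group Poisson model with offset log(hours)."""
    return (events1 / hours1) / (events0 / hours0)

# Hypothetical comparison: checking league vs non-checking league
rr = rate_ratio(events1=60, hours1=20000.0, events0=30, hours0=25000.0)
log_rr = math.log(rr)  # the coefficient the offset model would estimate
```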
- Published
- 2011
129. Frailty, core muscle size, and mortality in patients undergoing open abdominal aortic aneurysm repair.
- Author
-
Lee JS, He K, Harbaugh CM, Schaubel DE, Sonnenday CJ, Wang SC, Englesbe MJ, and Eliason JL
- Subjects
- Aged, Aortic Aneurysm, Abdominal diagnostic imaging, Elective Surgical Procedures, Female, Frail Elderly, Humans, Kaplan-Meier Estimate, Linear Models, Male, Michigan, Middle Aged, Organ Size, Patient Selection, Proportional Hazards Models, Retrospective Studies, Risk Assessment, Risk Factors, Survival Rate, Time Factors, Treatment Outcome, Vascular Surgical Procedures adverse effects, Aortic Aneurysm, Abdominal mortality, Aortic Aneurysm, Abdominal surgery, Aortography methods, Psoas Muscles diagnostic imaging, Tomography, X-Ray Computed, Vascular Surgical Procedures mortality
- Abstract
Objectives: Determining operative risk in patients undergoing aortic surgery is a difficult process, as multiple variables converge to affect overall mortality. Patient frailty is certainly a contributing factor, but is difficult to measure, with surgeons often relying on subjective or intuitive influences. We sought to use core muscle size as an objective measure of frailty, and determine its utility as a predictor of survival after abdominal aortic aneurysm (AAA) repair., Methods: Four hundred seventy-nine patients underwent elective open AAA repair between 2000 and 2008. Two hundred sixty-two patients (54.7%) had preoperative computed tomography (CT) scans available for analysis. Cross-sectional areas of the psoas muscles at the level of the L4 vertebra were measured. The covariate-adjusted effect of psoas area on postoperative mortality was assessed using Cox regression., Results: Of the 262 patients, there were 55 deaths and the mean length of follow-up was 2.3 years. Cox regression revealed a significant association between psoas area and postoperative mortality (P = .003). The effect of psoas area was found to decrease significantly as follow-up time increased (P = .008). Among all covariates included in the Cox models (including predictors of mortality such as American Society of Anesthesiologists [ASA] score), the psoas area was the most significant., Conclusion: Core muscle size, an objective measure of frailty, correlates strongly with mortality after elective AAA repair. A better understanding of the role of frailty and core muscle size may aid in risk stratification and impact timing of surgical repair, especially in more complex aortic operations., (Copyright © 2011 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.)
- Published
- 2011
130. Proportional hazards regression for the analysis of clustered survival data from case-cohort studies.
- Author
-
Zhang H, Schaubel DE, and Kalbfleisch JD
- Subjects
- Computer Simulation, Data Interpretation, Statistical, Humans, Risk Assessment methods, Risk Factors, Survival Rate, Biometry methods, Cluster Analysis, Cohort Studies, Models, Statistical, Proportional Hazards Models, Regression Analysis, Survival Analysis
- Abstract
Case-cohort sampling is a commonly used and efficient method for studying large cohorts. Most existing methods of analysis for case-cohort data have concerned the analysis of univariate failure time data. However, clustered failure time data are commonly encountered in public health studies. For example, patients treated at the same center are unlikely to be independent. In this article, we consider methods based on estimating equations for case-cohort designs for clustered failure time data. We assume a marginal hazards model, with a common baseline hazard and common regression coefficient across clusters. The proposed estimators of the regression parameter and cumulative baseline hazard are shown to be consistent and asymptotically normal, and consistent estimators of the asymptotic covariance matrices are derived. The regression parameter estimator is easily computed using any standard Cox regression software that allows for offset terms. The proposed estimators are investigated in simulation studies, and demonstrated empirically to have increased efficiency relative to some existing methods. The proposed methods are applied to a study of mortality among Canadian dialysis patients., (© 2010, The International Biometric Society.)
- Published
- 2011
131. A positive stable frailty model for clustered failure time data with covariate-dependent frailty.
- Author
-
Liu D, Kalbfleisch JD, and Schaubel DE
- Subjects
- Computer Simulation, Humans, Risk Assessment methods, Risk Factors, Survival Rate, United States epidemiology, Biometry methods, Cluster Analysis, Data Interpretation, Statistical, Kidney Transplantation mortality, Models, Statistical, Proportional Hazards Models, Survival Analysis
- Abstract
In this article, we propose a positive stable shared frailty Cox model for clustered failure time data where the frailty distribution varies with cluster-level covariates. The proposed model accounts for covariate-dependent intracluster correlation and permits both conditional and marginal inferences. We obtain marginal inference directly from a marginal model, then use a stratified Cox-type pseudo-partial likelihood approach to estimate the regression coefficient for the frailty parameter. The proposed estimators are consistent and asymptotically normal and a consistent estimator of the covariance matrix is provided. Simulation studies show that the proposed estimation procedure is appropriate for practical use with a realistic number of clusters. Finally, we present an application of the proposed method to kidney transplantation data from the Scientific Registry of Transplant Recipients., (© 2010, The International Biometric Society.)
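As a hedged illustration of the distribution involved (not the paper's estimation procedure), positive stable frailties can be simulated with Kanter's algorithm and checked against the defining Laplace transform E[exp(-s·S)] = exp(-s^alpha), the property that makes the marginal hazards in a positive stable frailty Cox model remain proportional.

```python
# Sketch: simulate positive stable frailties (index alpha in (0,1))
# via Kanter's algorithm and verify the Laplace transform by Monte Carlo.
import math
import random

def positive_stable(alpha, rng):
    """One draw from the positive stable law with E[exp(-s*S)] = exp(-s**alpha)."""
    u = rng.uniform(0.0, math.pi)   # U ~ Uniform(0, pi)
    e = rng.expovariate(1.0)        # E ~ Exponential(1)
    return (math.sin(alpha * u) / math.sin(u) ** (1.0 / alpha)) * \
           (math.sin((1.0 - alpha) * u) / e) ** ((1.0 - alpha) / alpha)

rng = random.Random(2024)
alpha = 0.5
draws = [positive_stable(alpha, rng) for _ in range(200_000)]

# Monte Carlo check of the defining property at s = 1:
# E[exp(-S)] should be close to exp(-1**alpha) = exp(-1)
lt_at_one = sum(math.exp(-s) for s in draws) / len(draws)
```

In the covariate-dependent version proposed in the abstract, alpha itself is modeled as a function of cluster-level covariates.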
- Published
- 2011
132. Double inverse-weighted estimation of cumulative treatment effects under nonproportional hazards and dependent censoring.
- Author
-
Schaubel DE and Wei G
- Subjects
- Computer Simulation, Humans, Male, Risk Assessment methods, Risk Factors, Survival Analysis, Survival Rate, United States epidemiology, Biometry methods, Cluster Analysis, Data Interpretation, Statistical, Kidney Transplantation mortality, Models, Statistical, Proportional Hazards Models, Waiting Lists mortality
- Abstract
In medical studies of time-to-event data, nonproportional hazards and dependent censoring are very common issues when estimating the treatment effect. A traditional method for dealing with time-dependent treatment effects is to model the time-dependence parametrically. Limitations of this approach include the difficulty to verify the correctness of the specified functional form and the fact that, in the presence of a treatment effect that varies over time, investigators are usually interested in the cumulative as opposed to instantaneous treatment effect. In many applications, censoring time is not independent of event time. Therefore, we propose methods for estimating the cumulative treatment effect in the presence of nonproportional hazards and dependent censoring. Three measures are proposed, including the ratio of cumulative hazards, relative risk, and difference in restricted mean lifetime. For each measure, we propose a double inverse-weighted estimator, constructed by first using inverse probability of treatment weighting (IPTW) to balance the treatment-specific covariate distributions, then using inverse probability of censoring weighting (IPCW) to overcome the dependent censoring. The proposed estimators are shown to be consistent and asymptotically normal. We study their finite-sample properties through simulation. The proposed methods are used to compare kidney wait-list mortality by race., (© 2010, The International Biometric Society.)
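The first weighting step (IPTW) can be sketched with a single binary confounder and propensity scores estimated by simple counting; the data are hypothetical, and the paper's second weight (IPCW, for dependent censoring) is omitted here.

```python
# Minimal IPTW sketch: weight each subject by the inverse probability of
# the treatment actually received given the confounder, which balances
# the confounder distribution across treatment arms. Hypothetical data.

def propensity(data, x):
    """P(A = 1 | X = x), estimated by the empirical frequency."""
    arms = [a for a, xi in data if xi == x]
    return sum(arms) / len(arms)

def iptw_weights(data):
    """w_i = 1 / P(A = a_i | X = x_i)."""
    weights = []
    for a, x in data:
        p = propensity(data, x)
        weights.append(1.0 / p if a == 1 else 1.0 / (1.0 - p))
    return weights

# (treatment A, confounder X): treatment is more likely when X = 1
data = [(1, 1), (1, 1), (1, 1), (0, 1), (1, 0), (0, 0), (0, 0), (0, 0)]
w = iptw_weights(data)

def weighted_mean_x(data, w, arm):
    """Weighted mean of X within one treatment arm; equal across arms => balance."""
    num = sum(wi * x for (a, x), wi in zip(data, w) if a == arm)
    den = sum(wi for (a, x), wi in zip(data, w) if a == arm)
    return num / den
```

After weighting, the X-distribution is the same in both arms, so crude weighted contrasts of cumulative hazards, relative risks, or restricted mean lifetimes are no longer confounded by X.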
- Published
- 2011
133. The effect of salvage therapy on survival in a longitudinal study with treatment by indication.
- Author
-
Kennedy EH, Taylor JM, Schaubel DE, and Williams S
- Subjects
- Aged, Aged, 80 and over, Confounding Factors, Epidemiologic, Data Interpretation, Statistical, Humans, Longitudinal Studies, Male, Proportional Hazards Models, Prostate-Specific Antigen, Prostatic Neoplasms drug therapy, Risk Assessment, Survival Analysis, Time Factors, Treatment Outcome, Androgen Antagonists therapeutic use, Neoplasm Recurrence, Local prevention & control, Prostatic Neoplasms prevention & control, Salvage Therapy
- Abstract
We consider using observational data to estimate the effect of a treatment on disease recurrence, when the decision to initiate treatment is based on longitudinal factors associated with the risk of recurrence. The effect of salvage androgen deprivation therapy (SADT) on the risk of recurrence of prostate cancer is inadequately described by the existing literature. Furthermore, standard Cox regression yields biased estimates of the effect of SADT, since it is necessary to adjust for prostate-specific antigen (PSA), which is a time-dependent confounder and an intermediate variable. In this paper, we describe and compare two methods which appropriately adjust for PSA in estimating the effect of SADT. The first method is a two-stage method which jointly estimates the effect of SADT and the hazard of recurrence in the absence of treatment by SADT. In the first stage, PSA is predicted in the absence of SADT, and in the second stage, a time-dependent Cox model is used to estimate the benefit of SADT, adjusting for PSA. The second method, called sequential stratification, reorganizes the data to resemble a sequence of experiments in which treatment is conditionally randomized given the time-dependent covariates. Strata are formed, each consisting of a patient undergoing SADT and a set of appropriately matched controls, and analysis proceeds via stratified Cox regression. Both methods are applied to data from patients initially treated with radiation therapy for prostate cancer and give similar SADT effect estimates., (Copyright © 2010 John Wiley & Sons, Ltd.)
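The stratum-building step of sequential stratification can be sketched as follows; the field names and patients are hypothetical, and a real application would additionally match controls on time-dependent covariates such as PSA before fitting the stratified Cox model.

```python
# Sketch of sequential stratification's data reorganization: each
# treatment initiation anchors a stratum of the treated patient plus
# controls who were still event-free and untreated at that time.
# A stratified Cox fit on the pooled strata (not shown) estimates the
# treatment effect. Field names ('id', 'treat_time', 'event_time') are
# hypothetical.

def build_strata(patients):
    """patients: dicts with 'id', 'treat_time' (None if never treated),
    and 'event_time'. Returns {treated_id: [matched control ids]}."""
    strata = {}
    for p in patients:
        s = p["treat_time"]
        if s is None:
            continue
        controls = [q["id"] for q in patients
                    if q["id"] != p["id"]
                    and q["event_time"] > s                              # still at risk at s
                    and (q["treat_time"] is None or q["treat_time"] > s)]  # untreated at s
        strata[p["id"]] = controls
    return strata

patients = [
    {"id": "A", "treat_time": 2.0, "event_time": 6.0},
    {"id": "B", "treat_time": None, "event_time": 5.0},
    {"id": "C", "treat_time": 4.0, "event_time": 7.0},
    {"id": "D", "treat_time": None, "event_time": 1.5},
]
strata = build_strata(patients)
```

Note that a patient treated later (C) can legitimately serve as a control in an earlier stratum (A's), mirroring the conditionally randomized experiments the method emulates.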
- Published
- 2010
- Full Text
- View/download PDF
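The matching step behind sequential stratification, as described in the abstract above, can be sketched in a few lines: each patient initiating SADT at time t anchors a stratum of controls who are still at risk, still untreated, and have a similar time-dependent PSA at t. The field names, the PSA-matching tolerance, and the toy data below are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of sequential-stratification matching. Field names,
# the PSA tolerance, and the toy data are assumptions for the example.

def build_strata(patients, psa_tolerance=2.0):
    """patients: list of dicts with keys 'id', 'treat_time' (None if never
    treated), 'followup' (time of death/censoring), and 'psa_at' (a callable
    mapping time t to the patient's PSA at t)."""
    strata = []
    for p in patients:
        t = p["treat_time"]
        if t is None:
            continue
        controls = [
            q["id"] for q in patients
            if q["id"] != p["id"]
            and q["followup"] > t                                  # still at risk at t
            and (q["treat_time"] is None or q["treat_time"] > t)   # untreated at t
            and abs(q["psa_at"](t) - p["psa_at"](t)) <= psa_tolerance
        ]
        if controls:
            strata.append({"index": p["id"], "time": t, "controls": controls})
    return strata

patients = [
    {"id": 1, "treat_time": 3.0,  "followup": 8.0,  "psa_at": lambda t: 4.0 + t},
    {"id": 2, "treat_time": None, "followup": 9.0,  "psa_at": lambda t: 5.0 + t},
    {"id": 3, "treat_time": None, "followup": 2.0,  "psa_at": lambda t: 4.5 + t},
    {"id": 4, "treat_time": 6.0,  "followup": 10.0, "psa_at": lambda t: 4.2 + t},
]
strata = build_strata(patients)  # one stratum per treated patient with matches
```

Each stratum then contributes a term to a stratified Cox partial likelihood, which is how the treatment effect is estimated under the paper's second method.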
134. Estimating treatment effects on the marginal recurrent event mean in the presence of a terminating event.
- Author
-
Schaubel DE and Zhang M
- Subjects
- Age Factors, Computer Simulation, Female, Hospitalization, Humans, Kidney Transplantation, Male, Tissue Donors, Data Interpretation, Statistical, Models, Statistical, Recurrence, Treatment Outcome
- Abstract
In biomedical studies where the event of interest is recurrent (e.g., hospitalization), it is often the case that the recurrent event sequence is subject to being stopped by a terminating event (e.g., death). In comparing treatment options, the marginal recurrent event mean is frequently of interest. One major complication in the recurrent/terminal event setting is that censoring times are not known for subjects observed to die, which renders standard risk-set-based methods of estimation inapplicable. We propose two semiparametric methods for estimating the difference or ratio of treatment-specific marginal mean numbers of events. The first method involves imputing unobserved censoring times, while the second method uses inverse probability of censoring weighting. In each case, imbalances in the treatment-specific covariate distributions are adjusted out through inverse probability of treatment weighting. After the imputation and/or weighting, the treatment-specific means (then their difference or ratio) are estimated nonparametrically. Large-sample properties are derived for each of the proposed estimators, with finite-sample properties assessed through simulation. The proposed methods are applied to kidney transplant data.

- Published
- 2010
- Full Text
- View/download PDF
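The second approach in the abstract above, inverse probability of censoring weighting (IPCW), can be sketched on toy data: each observed event at time t gets weight 1/G(t-), where G is a Kaplan-Meier estimate of the censoring survival function, and death is treated as a terminal event rather than as censoring. Data and names are illustrative, and the treatment-weighting step from the paper is omitted.

```python
# Toy IPCW estimate of the marginal mean number of recurrent events by tau.
# Illustrative only; the paper additionally applies inverse probability of
# treatment weighting before contrasting treatment-specific means.

def km_censoring(subjects):
    """Kaplan-Meier curve for the censoring distribution.
    subjects: list of (followup_time, status), status in {'censor', 'death'}."""
    censor_times = sorted({t for t, s in subjects if s == "censor"})
    curve, s = [], 1.0
    for t in censor_times:
        at_risk = sum(1 for u, _ in subjects if u >= t)
        n_cens = sum(1 for u, st in subjects if u == t and st == "censor")
        s *= 1.0 - n_cens / at_risk
        curve.append((t, s))
    return curve

def G_minus(curve, t):
    """Left-continuous censoring survival just before time t."""
    s = 1.0
    for u, v in curve:
        if u < t:
            s = v
    return s

def ipcw_mean_events(subjects, events, tau):
    """events: list of (subject_index, event_time) pairs."""
    curve = km_censoring(subjects)
    total = 0.0
    for i, t in events:
        if t <= tau and t <= subjects[i][0]:
            total += 1.0 / G_minus(curve, t)   # up-weight late events
    return total / len(subjects)

subjects = [(5.0, "censor"), (8.0, "death"), (10.0, "censor")]
events = [(0, 2.0), (1, 3.0), (1, 6.0), (2, 9.0)]
mean_events = ipcw_mean_events(subjects, events, 10.0)
```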
135. Racial and ethnic disparities in access to liver transplantation.
- Author
-
Mathur AK, Schaubel DE, Gong Q, Guidinger MK, and Merion RM
- Subjects
- Adult, Black or African American statistics & numerical data, Asian statistics & numerical data, Chronic Disease, Female, Hispanic or Latino statistics & numerical data, Humans, Liver Diseases mortality, Male, Middle Aged, Patient Selection, Proportional Hazards Models, Registries, Residence Characteristics, Resource Allocation, Risk Assessment, Risk Factors, Severity of Illness Index, Time Factors, Tissue Donors supply & distribution, Tissue and Organ Procurement statistics & numerical data, United States, Waiting Lists, White People statistics & numerical data, Ethnicity statistics & numerical data, Health Services Accessibility, Healthcare Disparities ethnology, Liver Diseases ethnology, Liver Diseases surgery, Liver Transplantation ethnology, Racial Groups statistics & numerical data
- Abstract
Access to liver transplantation is reportedly inequitable for racial/ethnic minorities, but inadequate adjustments for geography and disease progression preclude any meaningful conclusions. We aimed to evaluate the association between candidate race/ethnicity and liver transplant rates after thorough adjustments for these factors and to determine how uniform racial/ethnic disparities were across Model for End-Stage Liver Disease (MELD) scores. Chronic end-stage liver disease candidates initially wait-listed between February 28, 2002 and February 27, 2007 were identified from Scientific Registry for Transplant Recipients data. The primary outcome was deceased donor liver transplantation (DDLT); the primary exposure covariate was race/ethnicity (white, African American, Hispanic, Asian, and other). Cox regression was used to estimate the covariate-adjusted DDLT rates by race/ethnicity, which were stratified by the donation service area and MELD score. Averaging across all MELD scores, African Americans, Asians, and others had similar adjusted DDLT rates in comparison with whites. However, Hispanics had an 8% lower DDLT rate versus whites [hazard ratio (HR) = 0.92, P = 0.011]. The disparity among Hispanics was concentrated among patients with MELD scores < 20, with HR = 0.84 (P = 0.021) for MELD scores of 6 to 14 and HR = 0.85 (P = 0.009) for MELD scores of 15 to 19. Asians with MELD scores < 15 had a 24% higher DDLT rate than whites (HR = 1.24, P = 0.024). However, Asians with MELD scores of 30 to 40 had a 46% lower DDLT rate (HR = 0.54, P = 0.004). In conclusion, although African Americans did not have significantly different DDLT rates in comparison with similar white candidates, race/ethnicity-based disparities were prominent among subgroups of Hispanic and Asian candidates. By precluding the survival benefit of liver transplantation, this inequity may lead to excess mortality for minority candidates., ((c) 2010 AASLD.)
- Published
- 2010
- Full Text
- View/download PDF
136. Sarcopenia and mortality after liver transplantation.
- Author
-
Englesbe MJ, Patel SP, He K, Lynch RJ, Schaubel DE, Harbaugh C, Holcombe SA, Wang SC, Segev DL, and Sonnenday CJ
- Subjects
- Adult, Female, Follow-Up Studies, Humans, Incidence, Liver Transplantation adverse effects, Male, Michigan epidemiology, Middle Aged, Postoperative Period, Prognosis, Psoas Muscles diagnostic imaging, Psoas Muscles pathology, Retrospective Studies, Sarcopenia diagnosis, Sarcopenia etiology, Survival Rate trends, Time Factors, Tomography, X-Ray Computed, Liver Failure surgery, Liver Transplantation mortality, Sarcopenia epidemiology
- Abstract
Background: Surgeons frequently struggle to determine patient suitability for liver transplantation. Objective and comprehensive measures of overall burden of disease, such as sarcopenia, could inform clinicians and help avoid futile transplantations., Study Design: The cross-sectional area of the psoas muscle was measured on CT scans of 163 liver transplant recipients. After controlling for donor and recipient characteristics using Cox regression models, we described the relationship between psoas area and post-transplantation mortality., Results: Psoas area correlated poorly with Model for End-Stage Liver Disease score and serum albumin. Cox regression revealed a strong association between psoas area and post-transplantation mortality (hazard ratio = 3.7 per 1,000-mm² decrease in psoas area; p < 0.0001). When stratified into quartiles based on psoas area (holding donor and recipient characteristics constant), 1-year survival ranged from 49.7% for the quartile with the smallest psoas area to 87.0% for the quartile with the largest. Survival at 3 years among these groups was 26.4% and 77.2%, respectively. The impact of psoas area on survival exceeded that of all other covariates in these models., Conclusions: Central sarcopenia strongly correlates with mortality after liver transplantation. Objective measures of patient frailty, such as sarcopenia, can inform clinical decision making and, potentially, allocation policy. Additional work is needed to develop valid and clinically relevant measures of sarcopenia and frailty in liver transplantation., (Copyright 2010 American College of Surgeons. Published by Elsevier Inc. All rights reserved.)
- Published
- 2010
- Full Text
- View/download PDF
137. Portal vein thrombosis and liver transplant survival benefit.
- Author
-
Englesbe MJ, Schaubel DE, Cai S, Guidinger MK, and Merion RM
- Subjects
- Adult, Cohort Studies, Female, Humans, Liver Failure pathology, Liver Transplantation methods, Male, Middle Aged, Proportional Hazards Models, Tissue and Organ Procurement methods, Treatment Outcome, Waiting Lists, Liver Failure therapy, Liver Transplantation mortality, Portal Vein pathology, Venous Thrombosis mortality
- Abstract
Portal vein thrombosis (PVT) complicates the liver transplant operation and potentially affects waiting list survival. The implications on calculations of survival benefit, which balance both waiting list and posttransplant survival effects of PVT, have not been determined. The objective of this study is to describe the effect of PVT on the survival benefit of liver transplantation. Using Scientific Registry of Transplant Recipients data on adult liver transplant candidates wait-listed between September 2001 and December 2007, Cox proportional hazard models were fitted to estimate the covariate-adjusted effect of PVT on transplant rate, waiting list survival, and posttransplant survival. We then used sequential stratification to estimate liver transplant survival benefit by cross-classifications defined by Model for End-Stage Liver Disease (MELD) score and PVT status. The prevalence of reported PVT among 22,291 liver transplant recipients was 4.02% (N = 897). PVT was not a predictor of waiting list mortality (hazard ratio = 0.90, P = 0.23) but was a predictor of posttransplant mortality (hazard ratio = 1.32, P = 0.02). Overall, transplant benefit was not significantly different for patients with PVT versus without PVT (P = 0.21), but there was a shift in the benefit curve. Specifically, the threshold for transplant benefit among patients without PVT was MELD score >11 compared to MELD score >13 for patients with PVT. PVT is associated with significantly higher posttransplant mortality but does not affect waiting list mortality. Among patients with low MELD score, PVT is associated with less transplant survival benefit. Clinicians should carefully consider the risks of liver transplantation in clinically stable patients who have PVT., ((c) 2010 AASLD.)
- Published
- 2010
- Full Text
- View/download PDF
138. Kidneys from donors after cardiac death provide survival benefit.
- Author
-
Snoeijs MG, Schaubel DE, Hené R, Hoitsma AJ, Idu MM, Ijzermans JN, Ploeg RJ, Ringers J, Christiaans MH, Buurman WA, and van Heurn LW
- Subjects
- Adult, Cohort Studies, Female, Graft Survival, Humans, Kidney Failure, Chronic therapy, Male, Middle Aged, Netherlands, Renal Dialysis, Retrospective Studies, Survival Rate, Time Factors, Waiting Lists, Death, Kidney Failure, Chronic surgery, Kidney Transplantation mortality, Tissue and Organ Procurement standards
- Abstract
The continuing shortage of kidneys for transplantation requires major efforts to expand the donor pool. Donation after cardiac death (DCD) increases the number of available kidneys, but it is unknown whether patients who receive a DCD kidney live longer than patients who remain on dialysis and wait for a conventional kidney from a brain-dead donor (DBD). This observational cohort study included all 2575 patients who were registered on the Dutch waiting list for a first kidney transplant between January 1, 1999, and December 31, 2004. From listing until the earliest of death, living-donor kidney transplantation, or December 31, 2005, 459 patients received a DCD transplant and 680 patients received a DBD transplant. Graft failure during the first 3 months after transplantation was twice as likely for DCD kidneys as for DBD kidneys (12% versus 6.3%; P=0.001). Standard-criteria DCD transplantation associated with a 56% reduced risk for mortality (hazard ratio 0.44; 95% confidence interval 0.24 to 0.80) compared with continuing on dialysis and awaiting a standard-criteria DBD kidney. This reduction in mortality translates into 2.4-month additional expected lifetime during the first 4 years after transplantation for recipients of DCD kidneys compared with patients who await a DBD kidney. In summary, standard-criteria DCD kidney transplantation associates with increased survival of patients who have ESRD and are on the transplant waiting list.
- Published
- 2010
- Full Text
- View/download PDF
139. The incidence of cancer in a population-based cohort of Canadian heart transplant recipients.
- Author
-
Jiang Y, Villeneuve PJ, Wielgosz A, Schaubel DE, Fenton SS, and Mao Y
- Subjects
- Adolescent, Adult, Canada, Child, Cohort Studies, Female, Humans, Incidence, Lymphoma, Non-Hodgkin complications, Lymphoma, Non-Hodgkin epidemiology, Male, Middle Aged, Mouth Neoplasms complications, Mouth Neoplasms epidemiology, Risk, Treatment Outcome, Heart Diseases complications, Heart Diseases therapy, Heart Transplantation methods, Neoplasms complications, Neoplasms epidemiology
- Abstract
To assess the long-term risk of developing cancer among heart transplant recipients compared to the Canadian general population, we carried out a retrospective cohort study of 1703 patients who received a heart transplant between 1981 and 1998, identified from the Canadian Organ Replacement Register database. Vital status and cancer incidence were determined through record linkage to the Canadian Mortality Database and Canadian Cancer Registry. Cancer incidence rates among heart transplant patients were compared to those of the general population. The observed number of incident cancers was 160 with 58.9 expected in the general population (SIR = 2.7, 95% CI = 2.3, 3.2). The highest ratios were for non-Hodgkin's lymphoma (NHL) (SIR = 22.7, 95% CI = 17.3, 29.3), oral cancer (SIR = 4.3, 95% CI = 2.1, 8.0) and lung cancer (SIR = 2.0, 95% CI = 1.2, 3.0). Compared to the general population, SIRs for NHL were particularly elevated in the first year posttransplant during more recent calendar periods, and among younger patients. Within the heart transplant cohort, overall cancer risks increased with age, and the 15-year cumulative incidence of all cancers was estimated to be 17%. There is an excess of incident cases of cancer among heart transplant recipients. The relative excesses are most marked for NHL, oral and lung cancer.
- Published
- 2010
- Full Text
- View/download PDF
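The standardized incidence ratio (SIR) figures in the abstract above are straightforward to reproduce. A common large-sample 95% confidence interval treats the observed count O as Poisson, giving SIR = O/E with limits SIR·exp(±1.96/√O); whether the paper used this exact interval is our assumption, but it recovers the published overall figures (O = 160, E = 58.9).

```python
import math

# SIR = observed / expected, with an approximate Poisson-based 95% CI.
# This reproduces the abstract's overall result to one decimal place.

def sir_with_ci(observed, expected, z=1.96):
    sir = observed / expected
    factor = math.exp(z / math.sqrt(observed))
    return sir, sir / factor, sir * factor

sir, lo, hi = sir_with_ci(160, 58.9)
# rounds to the published SIR = 2.7, 95% CI = (2.3, 3.2)
```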
140. Efficient utilization of the expanded criteria donor (ECD) deceased donor kidney pool: an analysis of the effect of labeling.
- Author
-
Hirth RA, Pan Q, Schaubel DE, and Merion RM
- Subjects
- Clinical Laboratory Techniques, Humans, Names, Probability, Registries, Research, Kidney surgery, Tissue Donors statistics & numerical data
- Abstract
We investigated the effect of the expanded criteria donor (ECD) label on (i) recovery of kidneys and (ii) acceptance for transplantation given recovery. An ECD is age ≥60, or age 50-59 with ≥2 of 3 specified comorbidities. Using data from the Scientific Registry of Transplant Recipients from 1999 to 2005, we modeled recovery rates through linear regression and transplantation probabilities via logistic regression, focusing on organs from donors just-younger versus just-older than the ECD age thresholds. We split the sample at July 1, 2002 to determine how decisions changed at the approximate time of implementation of the ECD definition. Before July 2002, the number of recovered kidneys with 0-1 comorbidities dropped at age 60, but transplantation probabilities given recovery did not. After July 2002, the number of recovered kidneys with 0-1 comorbidities rose at age 60, but transplantation probabilities contingent on recovery declined. No similar trends were observed at donor age 50 among donors with ≥2 comorbidities. Overall, implementation of the ECD definition coincided with a reversal of an apparent reluctance to recover kidneys from donors over age 59, but increased selectiveness on the part of surgeons/centers with respect to these kidneys.
- Published
- 2010
- Full Text
- View/download PDF
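The ECD rule quoted in the abstract above is a simple predicate on donor age and comorbidity count. The abstract does not enumerate the three comorbidities; hypertension, terminal serum creatinine above 1.5 mg/dL, and cerebrovascular cause of death are the usual candidates and are assumed here.

```python
# ECD predicate per the abstract: age >= 60, or age 50-59 with >= 2 of 3
# comorbidities. The specific comorbidity names are an assumption, since the
# abstract does not list them.

def is_ecd(age, hypertension=False, creatinine_above_1_5=False, cva_death=False):
    if age >= 60:
        return True
    if 50 <= age <= 59:
        return hypertension + creatinine_above_1_5 + cva_death >= 2
    return False
```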
141. Effect of pretransplant serum creatinine on the survival benefit of liver transplantation.
- Author
-
Sharma P, Schaubel DE, Guidinger MK, and Merion RM
- Subjects
- Biomarkers blood, Female, Humans, Liver Failure blood, Liver Failure mortality, Male, Middle Aged, Patient Selection, Predictive Value of Tests, Preoperative Care, Proportional Hazards Models, Registries, Renal Replacement Therapy mortality, Retrospective Studies, Risk Assessment, Severity of Illness Index, Time Factors, United States epidemiology, Up-Regulation, Waiting Lists, Creatinine blood, Liver Failure therapy, Liver Transplantation mortality, Tissue Donors supply & distribution
- Abstract
More candidates with creatinine levels ≥2 mg/dL have undergone liver transplantation (LT) since the implementation of Model for End-Stage Liver Disease (MELD)-based allocation. These candidates have higher posttransplant mortality. This study examined the effect of serum creatinine on survival benefit among candidates undergoing LT. Scientific Registry of Transplant Recipients data were analyzed for adult LT candidates listed between September 2001 and December 2006 (n = 38,899). The effect of serum creatinine on survival benefit (contrast between waitlist and post-LT mortality rates) was assessed by sequential stratification, an extension of Cox regression. At the same MELD score, serum creatinine was inversely associated with survival benefit within certain defined MELD categories. The survival benefit significantly decreased as creatinine increased for candidates with MELD scores of 15 to 17 or 24 to 40 at LT (MELD scores of 15-17, P < 0.0001; MELD scores of 24-40, P = 0.04). Renal replacement therapy at LT was also associated with significantly decreased LT benefit for patients with MELD scores of 21 to 23 (P = 0.04) or 24 to 26 (P = 0.01). In conclusion, serum creatinine at LT significantly affects survival benefit for patients with MELD scores of 15 to 17 or 24 to 40. Given the same MELD score, patients with higher creatinine levels receive less benefit on average, and the relative ranking of a large number of wait-listed candidates with MELD scores of 15 to 17 or 24 to 40 would be markedly affected if these findings were incorporated into the allocation policy.
- Published
- 2009
- Full Text
- View/download PDF
142. Flexible estimation of differences in treatment-specific recurrent event means in the presence of a terminating event.
- Author
-
Pan Q and Schaubel DE
- Subjects
- Computer Simulation, Effect Modifier, Epidemiologic, Humans, Proportional Hazards Models, Biometry methods, Clinical Trials as Topic, Data Interpretation, Statistical, Endpoint Determination methods, Longitudinal Studies, Models, Statistical, Regression Analysis, Secondary Prevention
- Abstract
In this article, we consider the setting where the event of interest can occur repeatedly for the same subject (i.e., a recurrent event; e.g., hospitalization) and may be stopped permanently by a terminating event (e.g., death). Among the different ways to model recurrent/terminal event data, the marginal mean (i.e., averaging over the survival distribution) is of primary interest from a public health or health economics perspective. Often, the difference between treatment-specific recurrent event means will not be constant over time, particularly when treatment-specific differences in survival exist. In such cases, it makes more sense to quantify treatment effect based on the cumulative difference in the recurrent event means, as opposed to the instantaneous difference in the rates. We propose a method that compares treatments by separately estimating the survival probabilities and recurrent event rates given survival, then integrating to get the mean number of events. The proposed method combines an additive model for the conditional recurrent event rate and a proportional hazards model for the terminating event hazard. In constructing our measure, the treatment effects on survival and on the recurrent event rate among survivors are estimated separately; together they explain the mechanism generating the difference under study. The example that motivates this research is the repeated occurrence of hospitalization among kidney transplant recipients, where the effect of expanded criteria donor (ECD) compared to non-ECD kidney transplantation on the mean number of hospitalizations is of interest.
- Published
- 2009
- Full Text
- View/download PDF
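The target quantity in the abstract above can be written as mu(t) = ∫₀ᵗ S(u) r(u) du, the survival probability times the conditional recurrent event rate among survivors, integrated over time. A minimal numerical sketch follows, with toy closed-form inputs; in the paper, S comes from a proportional hazards model and r from an additive rate model.

```python
import math

# Numerically integrate mu(t) = integral_0^t S(u) * r(u) du by the midpoint
# rule. S and r are supplied as callables; the inputs below are toy choices.

def marginal_mean(S, r, t, steps=10000):
    h = t / steps
    total = 0.0
    for k in range(steps):
        u = (k + 0.5) * h       # midpoint of the k-th subinterval
        total += S(u) * r(u) * h
    return total

# Example: exponential survival S(u) = exp(-0.1 u) and constant rate r = 0.5.
# Closed form: mu(t) = 5 * (1 - exp(-0.1 t)), so mu(10) = 5 * (1 - e^{-1}).
mu = marginal_mean(lambda u: math.exp(-0.1 * u), lambda u: 0.5, 10.0)
```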
143. Effect of alcoholic liver disease and hepatitis C infection on waiting list and posttransplant mortality and transplant survival benefit.
- Author
-
Lucey MR, Schaubel DE, Guidinger MK, Tome S, and Merion RM
- Subjects
- Adult, Cohort Studies, Hepatitis C surgery, Humans, Liver Diseases, Alcoholic surgery, Proportional Hazards Models, Hepatitis C mortality, Liver Diseases, Alcoholic mortality, Liver Transplantation mortality, Waiting Lists
- Abstract
Unlabelled: Disease-specific analysis of liver transplant survival benefit, which encompasses both pre- and posttransplant events, has not been reported. Therefore, we evaluated the effect of alcoholic liver disease (ALD) and hepatitis C virus (HCV) infection on waiting list mortality, posttransplant mortality, and the survival benefit of deceased donor liver transplantation using United States data from the Scientific Registry of Transplant Recipients on 38,899 adults placed on the transplant waiting list between September 2001 and December 2006. Subjects were classified according to the presence/absence of HCV and ALD. Cox regression was used to estimate waiting list mortality and posttransplant mortality separately. Survival benefit was assessed using sequential stratification. Overall, the presence of HCV significantly increased waiting list mortality, with a covariate-adjusted hazard ratio (HR) of 1.19 for HCV-positive (HCV+) compared with HCV-negative (HCV-) patients (P = 0.0001). The impact of HCV+ was significantly more pronounced (P = 0.001) among ALD-positive (ALD+) patients (HR = 1.36; P < 0.0001), but was still significant among ALD-negative (ALD-) patients (HR = 1.11; P = 0.02). The contrast between ALD+ and ALD- waiting list mortality was significant only among HCV+ patients (HR = 1.14; P = 0.006). Posttransplant mortality was significantly increased among HCV+ (versus HCV-) patients (HR = 1.26; P = 0.0009), but not among ALD+ (versus ALD-) patients. Survival benefit of transplantation was significantly decreased among HCV+ compared with HCV- recipients with Model for End-Stage Liver Disease (MELD) scores 9-29, but was significantly increased at MELD ≥30. ALD did not influence the survival benefit of transplantation at any MELD score., Conclusion: Except in patients with very low or very high MELD scores, HCV status has a significant negative impact on the survival benefit of liver transplantation. In contrast, the presence of ALD does not influence liver transplant survival benefit.
- Published
- 2009
- Full Text
- View/download PDF
144. A comprehensive risk quantification score for deceased donor kidneys: the kidney donor risk index.
- Author
-
Rao PS, Schaubel DE, Guidinger MK, Andreoni KA, Wolfe RA, Merion RM, Port FK, and Sung RS
- Subjects
- Adolescent, Adult, Cadaver, Creatinine blood, Female, Graft Rejection epidemiology, Graft Rejection mortality, Graft Survival, Humans, Kidney Transplantation mortality, Male, Middle Aged, Proportional Hazards Models, Retrospective Studies, Young Adult, Kidney Transplantation adverse effects, Risk Assessment, Tissue Donors
- Abstract
Background: We propose a continuous kidney donor risk index (KDRI) for deceased donor kidneys, combining donor and transplant variables to quantify graft failure risk., Methods: By using national data from 1995 to 2005, we analyzed 69,440 first-time, kidney-only, deceased donor adult transplants. Cox regression was used to model the risk of death or graft loss, based on donor and transplant factors, adjusting for recipient factors. The proposed KDRI includes 14 donor and transplant factors, each found to be independently associated with graft failure or death: donor age, race, history of hypertension, history of diabetes, serum creatinine, cerebrovascular cause of death, height, weight, donation after cardiac death, hepatitis C virus status, human leukocyte antigen-B and DR mismatch, cold ischemia time, and double or en bloc transplant. The KDRI reflects the rate of graft failure relative to that of a healthy 40-year-old donor., Results: Transplants of kidneys in the highest KDRI quintile (>1.45) had an adjusted 5-year graft survival of 63%, compared with 82% and 79% in the two lowest KDRI quintiles (<0.79 and 0.79-<0.96, respectively). There is a considerable overlap in the KDRI distribution by expanded and nonexpanded criteria donor classification., Conclusions: The graded impact of KDRI on graft outcome makes it a useful decision-making tool at the time of the deceased donor kidney offer.
- Published
- 2009
- Full Text
- View/download PDF
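Structurally, a KDRI-style index is an exponentiated Cox linear predictor in donor and transplant factors, expressed relative to a reference donor (a healthy 40-year-old, per the abstract above). The coefficients and the reduced factor set below are illustrative placeholders only; the abstract does not report the fitted values for its 14 factors.

```python
import math

# Sketch of a KDRI-style index. The coefficients and factor subset are
# ILLUSTRATIVE assumptions, not the published KDRI coefficients.

BETA = {
    "age_minus_40_per_yr": 0.012,        # per year of donor age above 40
    "hypertension": 0.12,
    "diabetes": 0.13,
    "dcd": 0.13,                         # donation after cardiac death
    "cold_ischemia_per_hr_over_20": 0.005,
}

def kdri(donor):
    lp = (BETA["age_minus_40_per_yr"] * (donor["age"] - 40)
          + BETA["hypertension"] * donor["hypertension"]
          + BETA["diabetes"] * donor["diabetes"]
          + BETA["dcd"] * donor["dcd"]
          + BETA["cold_ischemia_per_hr_over_20"] * max(0, donor["cit_hr"] - 20))
    return math.exp(lp)   # graft-failure rate relative to the reference donor

reference = {"age": 40, "hypertension": 0, "diabetes": 0, "dcd": 0, "cit_hr": 20}
older_donor = {"age": 60, "hypertension": 1, "diabetes": 0, "dcd": 0, "cit_hr": 30}
```

By construction the reference donor scores exactly 1.0, and higher-risk donors score above 1.0, matching how the paper's KDRI quintiles are interpreted.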
145. Who should get a liver graft?
- Author
-
Freeman RB, Jamieson N, Schaubel DE, Porte RJ, and Villamil FG
- Subjects
- Carcinoma, Hepatocellular pathology, Carcinoma, Hepatocellular surgery, Health Care Rationing, Humans, Liver Failure surgery, Liver Neoplasms pathology, Liver Neoplasms surgery, Liver Transplantation mortality, Neoplasm Staging, Patient Dropouts, Resource Allocation methods, Resource Allocation standards, Risk Factors, Survival Analysis, Survivors, Tissue and Organ Procurement methods, Treatment Failure, United Kingdom, Liver Transplantation statistics & numerical data, Patient Selection
- Published
- 2009
- Full Text
- View/download PDF
146. Survival benefit-based deceased-donor liver allocation.
- Author
-
Schaubel DE, Guidinger MK, Biggins SW, Kalbfleisch JD, Pomfret EA, Sharma P, and Merion RM
- Subjects
- Follow-Up Studies, Humans, Liver Diseases classification, Liver Diseases mortality, Liver Diseases surgery, Liver Transplantation mortality, Reoperation statistics & numerical data, Survival Rate, Survivors, Tissue Donors statistics & numerical data, Waiting Lists, Life Expectancy, Liver Transplantation statistics & numerical data, Resource Allocation statistics & numerical data, Tissue Donors supply & distribution
- Abstract
Currently, patients awaiting deceased-donor liver transplantation are prioritized by medical urgency. Specifically, wait-listed chronic liver failure patients are sequenced in decreasing order of Model for End-stage Liver Disease (MELD) score. To maximize lifetime gained through liver transplantation, posttransplant survival should be considered in prioritizing liver waiting list candidates. We evaluate a survival-benefit-based system for allocating deceased-donor livers to chronic liver failure patients. Under the proposed system, at the time of offer, the transplant survival benefit score would be computed for each patient active on the waiting list. The proposed score is based on the difference in 5-year mean lifetime (with vs. without a liver transplant) and accounts for patient and donor characteristics. The rank correlation between benefit score and MELD score is 0.67. There is great overlap in the distribution of benefit scores across MELD categories, since waiting list mortality is significantly affected by several factors. Simulation results indicate that over 2000 life-years would be saved per year if benefit-based allocation were implemented. The shortage of donor livers increases the need to maximize the life-saving capacity of procured livers. Allocation of deceased-donor livers to chronic liver failure patients would be improved by prioritizing patients by transplant survival benefit.
- Published
- 2009
- Full Text
- View/download PDF
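The benefit score described in the abstract above is a difference in 5-year restricted mean lifetime: the area between the post-transplant and waiting-list survival curves up to 5 years. With step-function survival estimates, that area is a sum of rectangles. The curves below are toy inputs; the paper's curves are covariate-adjusted model estimates.

```python
# Restricted mean survival time (RMST) for a right-continuous step survival
# curve, and the 5-year benefit as the difference of two RMSTs. Toy curves.

def restricted_mean(curve, tau):
    """curve: sorted list of (time, survival) step points, with S(0) = 1."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in curve:
        if t >= tau:
            break
        area += prev_s * (t - prev_t)   # rectangle up to the next drop
        prev_t, prev_s = t, s
    area += prev_s * (tau - prev_t)     # final rectangle out to tau
    return area

post_tx = [(1.0, 0.90), (3.0, 0.80), (5.0, 0.75)]    # survival after transplant
wait_list = [(0.5, 0.80), (2.0, 0.55), (4.0, 0.35)]  # survival without transplant
benefit_years = restricted_mean(post_tx, 5.0) - restricted_mean(wait_list, 5.0)
```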
147. Evaluating bias correction in weighted proportional hazards regression.
- Author
-
Pan Q and Schaubel DE
- Subjects
- Bias, Computer Simulation, Humans, Kidney Transplantation mortality, Patient Selection, Population Density, Probability, Reproducibility of Results, Survival Analysis, Survivors, Kidney Transplantation physiology, Proportional Hazards Models, Regression Analysis
- Abstract
Often in observational studies of time to an event, the study population is a biased (i.e., unrepresentative) sample of the target population. In the presence of biased samples, it is common to weight subjects by the inverse of their respective selection probabilities. Pan and Schaubel (Can J Stat 36:111-127, 2008) recently proposed inference procedures for an inverse selection probability weighted (ISPW) Cox model, applicable when selection probabilities are not treated as fixed but estimated empirically. The proposed weighting procedure requires auxiliary data to estimate the weights and is computationally more intensive than unweighted estimation. The ignorability of the sample selection process, in terms of parameter estimators and predictions, is often of interest from several perspectives: e.g., to determine if weighting makes a significant difference to the analysis at hand, which would in turn address whether the collection of auxiliary data is required in future studies; or to evaluate previous studies that did not correct for selection bias. In this article, we propose methods to quantify the degree of bias corrected by the weighting procedure in the partial likelihood and Breslow-Aalen estimators. Asymptotic properties of the proposed test statistics are derived. The finite-sample significance level and power are evaluated through simulation. The proposed methods are then applied to data from a national organ failure registry to evaluate the bias in a post-kidney transplant survival model.
- Published
- 2009
- Full Text
- View/download PDF
148. Liver transplantation and subsequent risk of cancer: findings from a Canadian cohort study.
- Author
-
Jiang Y, Villeneuve PJ, Fenton SS, Schaubel DE, Lilly L, and Mao Y
- Subjects
- Adolescent, Adult, Age Factors, Aged, Canada, Child, Cohort Studies, Female, Follow-Up Studies, Humans, Lymphoma, Non-Hodgkin etiology, Male, Middle Aged, Risk, Treatment Outcome, Liver Transplantation adverse effects, Neoplasms etiology
- Abstract
Characterization of the long-term cancer risks among liver transplant patients has been hampered by the paucity of sufficiently large cohorts. The increase over time in the number of liver transplants coupled with improved survival underscores the need to better understand associated long-term health effects. Subjects for this cohort study were assembled from the population-based Canadian Organ Replacement Registry. Analyses are based on 2034 patients who received a liver transplant between June 1983 and October 1998. Incident cases of cancer were identified through record linkage to the Canadian Cancer Registry. We compared site-specific cancer incidence rates in the cohort and the general Canadian population by using the standardized incidence ratio (SIR). Stratified analyses were performed to examine variations in risk according to age at transplantation, sex, time since transplantation, and year of transplantation. Liver transplant recipients had cancer incidence rates that were 2.5 times higher than those of the general population [95% confidence interval (CI) = 2.1, 3.0]. The highest SIR was observed for non-Hodgkin's lymphoma (SIR = 20.8, 95% CI = 14.9, 28.3), whereas a statistically significant excess was observed for colorectal cancer (SIR = 2.6, 95% CI = 1.4, 4.4). Risks were more pronounced during the first year of follow-up and among younger transplant patients. In conclusion, our findings indicate that liver transplant patients face increased risks of developing cancer compared with the general population. Increased surveillance in this patient population, particularly in the first year following transplantation, and screening for colorectal cancer with modalities for which benefits are already well recognized should be pursued.
- Published
- 2008
- Full Text
- View/download PDF
149. Re-weighting the model for end-stage liver disease score components.
- Author
-
Sharma P, Schaubel DE, Sima CS, Merion RM, and Lok AS
- Subjects
- Adolescent, Adult, Aged, Aged, 80 and over, Creatinine metabolism, Female, Graft Rejection epidemiology, Humans, Incidence, Liver Failure complications, Liver Failure surgery, Male, Middle Aged, Proportional Hazards Models, Renal Insufficiency diagnosis, Renal Insufficiency metabolism, Retrospective Studies, Survival Rate trends, United States epidemiology, Young Adult, Graft Rejection prevention & control, Liver Failure diagnosis, Liver Transplantation standards, Renal Insufficiency complications, Resource Allocation methods, Severity of Illness Index, Waiting Lists
- Abstract
Background & Aims: Liver transplant candidates with mild hepatic synthetic dysfunction and marked renal insufficiency may have higher Model for End-Stage Liver Disease (MELD) scores than candidates with severe liver disease and normal renal function. We re-estimated MELD coefficients and evaluated the effect of updated MELD on the liver transplant waiting list ranking., Methods: Scientific Registry of Transplant Recipients data was analyzed for 38,899 adults wait-listed between September, 2001 and December, 2006. A time-dependent Cox regression waiting list mortality model estimated updated MELD component coefficients. Rank correlation between existing and updated MELD scores was computed., Results: Existing MELD component coefficients (log_e creatinine, 0.957 vs 1.266 [95% confidence interval (CI), 1.21-1.32]; log_e bilirubin, 0.378 vs 0.939 [95% CI, 0.91-0.97]; log_e international normalized ratio, 1.120 vs 1.658 [95% CI, 1.58-1.74]) were significantly different from their updated counterparts. Index of concordance was higher for updated MELD than existing MELD for predicting overall (0.68 vs. 0.64) and 90-day waiting list mortality (0.77 vs. 0.75). Rank correlation between existing and updated MELD scores was 0.95 for all candidates and 0.72 for candidates with existing MELD ≥20. Among candidates with equal existing MELD, those with lower creatinine and higher bilirubin had significantly higher waiting list mortality., Conclusions: Existing MELD coefficient components are significantly different than those calculated from national waiting list data. Updated MELD assigns lower weight to creatinine and international normalized ratio and higher weight to bilirubin. Updated MELD better predicts waiting list mortality. Using updated MELD for liver allocation would alter waiting list candidate ranking.
- Published
- 2008
- Full Text
- View/download PDF
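Because the abstract above reports both coefficient sets, the re-weighting can be illustrated directly as two competing linear predictors in the logged MELD components. The scaling and capping used to map such a predictor onto the familiar 6-40 MELD scale are omitted here, so the values below are linear predictors, not MELD scores.

```python
import math

# Existing vs re-estimated (updated) MELD component coefficients, taken from
# the abstract. Inputs: creatinine (mg/dL), bilirubin (mg/dL), INR.

EXISTING = {"creatinine": 0.957, "bilirubin": 0.378, "inr": 1.120}
UPDATED  = {"creatinine": 1.266, "bilirubin": 0.939, "inr": 1.658}

def linear_predictor(coef, creatinine, bilirubin, inr):
    return (coef["creatinine"] * math.log(creatinine)
            + coef["bilirubin"] * math.log(bilirubin)
            + coef["inr"] * math.log(inr))

# The update raises bilirubin's weight relative to creatinine, consistent
# with the abstract's conclusion:
ratio_existing = EXISTING["bilirubin"] / EXISTING["creatinine"]
ratio_updated = UPDATED["bilirubin"] / UPDATED["creatinine"]
```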
150. Estimating cumulative treatment effects in the presence of nonproportional hazards.
- Author
-
Wei G and Schaubel DE
- Subjects
- Confidence Intervals, Humans, Kidney Failure, Chronic mortality, Kidney Failure, Chronic therapy, Peritoneal Dialysis statistics & numerical data, Registries statistics & numerical data, Renal Dialysis statistics & numerical data, Survival Analysis, Biometry methods, Proportional Hazards Models, Treatment Outcome
- Abstract
Often in medical studies of time to an event, the treatment effect is not constant over time. In the context of Cox regression modeling, the most frequent solution is to apply a model that assumes the treatment effect is either piecewise constant or varies smoothly over time, i.e., the Cox nonproportional hazards model. This approach has at least two major limitations. First, it is generally difficult to assess whether the parametric form chosen for the treatment effect is correct. Second, in the presence of nonproportional hazards, investigators are usually more interested in the cumulative than the instantaneous treatment effect (e.g., determining if and when the survival functions cross). Therefore, we propose an estimator for the aggregate treatment effect in the presence of nonproportional hazards. Our estimator is based on the treatment-specific baseline cumulative hazards estimated under a stratified Cox model. No functional form for the nonproportionality need be assumed. Asymptotic properties of the proposed estimators are derived, and the finite-sample properties are assessed in simulation studies. Pointwise and simultaneous confidence bands of the estimator can be computed. The proposed method is applied to data from a national organ failure registry.
- Published
- 2008
- Full Text
- View/download PDF
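The building blocks of the estimator in the abstract above are treatment-specific baseline cumulative hazards. Without covariate adjustment these reduce to group-wise Nelson-Aalen estimates, sketched below on toy data; the cumulative treatment effect is then read off as the contrast between the two curves at each time point, rather than from an instantaneous hazard ratio. This is an unadjusted analogue, not the stratified Cox estimator of the paper.

```python
# Nelson-Aalen cumulative hazard per treatment group, on toy data.
# times: follow-up times; events: 1 = event, 0 = censored.

def nelson_aalen(times, events):
    data = sorted(zip(times, events))
    event_times = sorted({t for t, e in data if e == 1})
    out, H = [], 0.0
    for t in event_times:
        d = sum(1 for u, e in data if u == t and e == 1)   # events at t
        n = sum(1 for u, _ in data if u >= t)              # at risk at t
        H += d / n
        out.append((t, H))
    return out

group0 = nelson_aalen([2, 3, 5, 7], [1, 0, 1, 1])
group1 = nelson_aalen([1, 4, 6, 8], [1, 1, 0, 1])
# cumulative contrast at the end of follow-up: group0 vs group1
contrast = group0[-1][1] - group1[-1][1]
```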