87 results for "Koch, Gary G."
Search Results
2. Sensitivity analysis for missing dichotomous outcome data in multi-visit randomized clinical trial with randomization-based covariance adjustment.
- Author
- Li, Siying, Koch, Gary G., Preisser, John S., Lam, Diana, and Sanchez-Kam, Matilde
- Subjects
- CLINICAL trials, RANDOMIZATION (Statistics), ANALYSIS of covariance, HEALTH outcome assessment, SCIENTIFIC observation
- Abstract
Dichotomous endpoints in clinical trials have only two possible outcomes, either directly or via categorization of an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed to the favorable and unfavorable outcomes mathematically to address possibly informative missing data. Adjusted proportion estimates and their closed form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
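The redistribution idea in the abstract above can be illustrated with a toy calculation. This is a hypothetical sketch, not the paper's closed-form method: a single sensitivity parameter `pi_unfavorable` (an assumed name) states what fraction of an arm's missing outcomes would have been unfavorable, and the adjusted favorable proportion is recomputed under each assumption.

```python
def adjusted_proportion(n_favorable, n_unfavorable, n_missing, pi_unfavorable):
    """Redistribute an arm's missing counts between the two outcomes
    under an assumed missing-not-at-random split, then return the
    adjusted favorable proportion.

    pi_unfavorable is the sensitivity parameter: the assumed fraction
    of missing observations that would have been unfavorable.
    """
    n_total = n_favorable + n_unfavorable + n_missing
    favorable = n_favorable + (1.0 - pi_unfavorable) * n_missing
    return favorable / n_total

# Sweep the assumption from "all missing favorable" (0.0)
# to "all missing unfavorable" (1.0) for one hypothetical arm.
for pi in (0.0, 0.5, 1.0):
    print(adjusted_proportion(60, 30, 10, pi))
```

Sweeping `pi_unfavorable` over a grid gives the usual tipping-point display: how extreme the assumed MNAR mechanism must be before the treatment comparison changes conclusion.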
3. Randomization-based adjustment of multiple treatment hazard ratios for covariates with missing data.
- Author
- Lam, Diana, Koch, Gary G., Preisser, John S., Saville, Benjamin R., and Hussey, Michael A.
- Subjects
- CLINICAL trials, TREATMENT effectiveness, RANDOMIZATION (Statistics), ANALYSIS of covariance, CONFIDENCE
- Abstract
Clinical trials are designed to compare treatment effects when applied to samples from the same population. Randomization is used so that the samples are not biased with respect to baseline covariates that may influence the efficacy of the treatment. We develop randomization-based covariance adjustment methodology to estimate the log hazard ratios of multiple treatments, and their confidence intervals, in a randomized clinical trial with time-to-event outcomes and missingness among the baseline covariates. The randomization-based covariance adjustment method is a computationally straightforward method for handling missing baseline covariate values. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
4. Randomization-Based Adjustment of Treatment Hazard Ratio for Covariates With Missing Data.
- Author
- Lam, Diana, Koch, Gary G., Preisser, John S., and Saville, Benjamin R.
- Subjects
- CLINICAL trials, MISSING data (Statistics), PROPORTIONAL hazards models
- Abstract
Clinical trials are designed to evaluate treatment effects while taking into account how covariates such as age and gender may influence the comparison between treatments. Including covariates in the model to evaluate treatment effects on time to event outcomes presents complications for a regulatory clinical trial because the covariates may need to meet modeling assumptions. We provide methodology to estimate the hazard ratio for treatments in a randomized trial with time to event outcomes and missingness among the baseline covariates by adjusting for the covariates in a randomization-based way. Such adjustment for covariates is an attractive methodology in the regulatory setting as it requires only minimal assumptions. The method is illustrated for data from an oncology clinical trial. Its application is computationally straightforward for managing missing data among the baseline covariates, and its results for the illustrative clinical trial are similar to those from multiple imputation for missing covariate data. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
5. Commentary for the Missing Data Working Group's perspective for regulatory clinical trials, estimands, and sensitivity analyses.
- Author
- Koch, Gary G. and Wiener, Laura Elizabeth
- Published
- 2016
- Full Text
- View/download PDF
6. Nonparametric randomization-based covariate adjustment for stratified analysis of time-to-event or dichotomous outcomes.
- Author
- Hussey, Michael A., Koch, Gary G., Preisser, John S., and Saville, Benjamin R.
- Subjects
- CLINICAL trials, PROPORTIONAL hazards models, LOG-rank test, THERAPEUTICS research, NONPARAMETRIC estimation
- Abstract
Time-to-event or dichotomous outcomes in randomized clinical trials often have analyses using the Cox proportional hazards model or conditional logistic regression, respectively, to obtain covariate-adjusted log hazard (or odds) ratios. Nonparametric Randomization-Based Analysis of Covariance (NPANCOVA) can be applied to unadjusted log hazard (or odds) ratios estimated from a model containing treatment as the only explanatory variable. These adjusted estimates are stratified population-averaged treatment effects and only require a valid randomization to the two treatment groups and avoid key modeling assumptions (e.g., proportional hazards in the case of a Cox model) for the adjustment variables. The methodology has application in the regulatory environment where such assumptions cannot be verified a priori. Application of the methodology is illustrated through three examples on real data from two randomized trials. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
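The randomization-based covariance adjustment described in the abstract above can be illustrated for the simplest case of a single baseline covariate. The function name and input values are hypothetical; the sketch shows only the general form of such adjustments (the unadjusted effect minus its regression on the observed covariate imbalance, which has expectation zero under randomization), not the NPANCOVA implementation from the paper.

```python
def covariance_adjust(d_y, d_x, v_yy, v_yx, v_xx):
    """Randomization-based covariance adjustment for one baseline covariate.

    d_y  : unadjusted treatment effect (e.g., a log hazard ratio)
    d_x  : observed treatment difference in the covariate mean
    v_.. : elements of the joint covariance matrix of (d_y, d_x)

    Under randomization d_x has expectation zero, so constraining it to
    zero by regression yields an adjusted effect whose variance is never
    larger than v_yy.
    """
    b = v_yx / v_xx                        # regression of d_y on d_x
    adjusted = d_y - b * d_x               # covariate-adjusted effect
    var_adjusted = v_yy - v_yx ** 2 / v_xx  # reduced variance
    return adjusted, var_adjusted

# Hypothetical inputs: a log hazard ratio of -0.30 with a small
# chance imbalance of 0.10 in a baseline covariate.
adj, var_adj = covariance_adjust(d_y=-0.30, d_x=0.10,
                                 v_yy=0.04, v_yx=0.01, v_xx=0.02)
```

With several covariates the same computation uses the matrix analogue, with `v_yx` a row vector and `v_xx` a covariance matrix.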
7. Comment.
- Author
- Koch, Gary G.
- Subjects
- HYPOGLYCEMIC agents, CLINICAL trials
- Abstract
This commentary discusses the article by Marchenko et al., and it provides some additional remarks for issues identified in that article. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
8. Applications of Extensions of Bivariate Rank Sum Statistics to the Crossover Design to Compare Two Treatments Through Four Sequence Groups.
- Author
- Kawaguchi, Atsushi, Koch, Gary G., and Ramaswamy, Ratna
- Subjects
- STATISTICS, MATHEMATICAL sequences, CLINICAL trials, OPHTHALMOLOGY, THERAPEUTICS, PLACEBOS, SIMULATION methods & models
- Abstract
This article describes applications of extensions of bivariate rank sum statistics to the crossover design with four sequence groups for two treatments. A randomized clinical trial in ophthalmology provides motivating background for the discussion. The bilateral design for this study has four sequence groups T:T, T:P, P:T, and P:P, respectively, for T as test treatment or P as placebo in the corresponding order for the left and right eyes. This article describes how to use the average of the separate Wilcoxon rank sum statistics for the left and right eyes for the overall comparison between T and P with the correlation between the two eyes taken into account. An extension of this criterion with better sensitivity to potential differences between T and P through reduction of the applicable variance has discussion in terms of a conceptual model with constraints for within-side homogeneity of groups with the same treatment and between-side homogeneity of the differences between T and P. Goodness of fit for this model can have assessment with test statistics for its corresponding constraints. Simulation studies for the conceptual model confirm better power for the extended test statistic with its full invocation than other criteria without this property. The methods summarized here are illustrated for the motivating clinical trial in ophthalmology, but they are applicable to other situations with the crossover design with four sequence groups for either two locations for two treatments at the same time for a patient or two successive periods for the assigned treatments for a recurrent disorder. This article also notes that the methods based on its conceptual model can have unsatisfactory power for departures from that model where the difference between T and P via the T:T and P:P groups is not similar to that via the T:P and P:T groups, as might occur when T has a systemic effect in a bilateral trial. 
For this situation, more robust test statistics have identification, but there is recognition that the parallel groups design with only the T:T and P:P groups may be more useful than the bilateral design with four sequence groups. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
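A toy version of the bilateral comparison in the abstract above: average the left- and right-eye Mann-Whitney statistics, then bootstrap patients as (left, right) pairs so the correlation between fellow eyes enters the standard error. This is a hypothetical sketch covering only the T:T and P:P sequence groups; the paper's four-group variance formulas and homogeneity constraints are not reproduced here.

```python
import random

def mann_whitney_prop(treated, placebo):
    """Mann-Whitney form of the Wilcoxon rank-sum statistic:
    P(treated outcome > placebo outcome), with ties counted as half."""
    wins = 0.0
    for t in treated:
        for p in placebo:
            wins += 1.0 if t > p else (0.5 if t == p else 0.0)
    return wins / (len(treated) * len(placebo))

def bilateral_rank_test(tt, pp, n_boot=1000, seed=1):
    """Average the left- and right-eye Mann-Whitney statistics for a
    T:T versus P:P comparison, bootstrapping patients as (left, right)
    pairs so the between-eye correlation is reflected in the SE.

    tt, pp: lists of (left_eye, right_eye) outcomes per patient.
    """
    rng = random.Random(seed)

    def averaged(t_pairs, p_pairs):
        left = mann_whitney_prop([a for a, _ in t_pairs], [a for a, _ in p_pairs])
        right = mann_whitney_prop([b for _, b in t_pairs], [b for _, b in p_pairs])
        return 0.5 * (left + right)

    estimate = averaged(tt, pp)
    boots = []
    for _ in range(n_boot):
        boots.append(averaged([rng.choice(tt) for _ in tt],
                              [rng.choice(pp) for _ in pp]))
    mean = sum(boots) / n_boot
    se = (sum((x - mean) ** 2 for x in boots) / (n_boot - 1)) ** 0.5
    return estimate, se
```

Resampling whole patients, rather than eyes, is what keeps the within-patient correlation in the variance estimate.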
9. Comments on 'Current issues in non-inferiority trials' by Thomas R. Fleming, Statistics in Medicine, DOI: 10.1002/sim.2855.
- Author
- Koch, Gary G.
- Published
- 2008
- Full Text
- View/download PDF
10. Statistical consideration of the strategy for demonstrating clinical evidence of effectiveness-one larger vs two smaller pivotal studies by Z. Shun, E. Chi, S. Durrleman and L. Fisher, Statistics in Medicine 2005; 24:1619-1637.
- Author
- Koch, Gary G.
- Published
- 2005
- Full Text
- View/download PDF
11. Antidiabetic Drugs and Heart Failure Risk in Patients With Type 2 Diabetes in the U.K. Primary Care Setting.
- Author
- Maru, Shoko, Koch, Gary G., Stender, Monika, Clark, Douglas, Gibowski, Laura, Petri, Hans, White, Alice D., and Simpson Jr., Ross J.
- Subjects
- HYPOGLYCEMIC agents, DRUGS, HYPOGLYCEMIA, HEART failure, TYPE 2 diabetes, DIABETES
- Abstract
OBJECTIVE -- To assess the effects of antidiabetic drugs on the risk of heart failure in patients with type 2 diabetes. RESEARCH DESIGN AND METHODS -- We conducted a retrospective cohort study with a newly diagnosed diabetes cohort of 25,690 patients registered in the U.K. General Practice Research Database, 1988-1999. We categorized person-time drug exposures to monotherapies in insulin, sulfonylureas (SUs), metformins, and other oral hypoglycemic agents (i.e., acarbose, guar gum) and combination therapy including insulin, combination therapy without insulin, and triple combination therapy with or without insulin. A drug-free time interval served as a reference category. Cox interval-wise (piece-wise) regression analyses were used. The main outcome was incident heart failure. RESULTS -- Among 43,390 drug exposure intervals for 25,690 patients who had a mean follow-up period of 2.5 years, 1,409 patients developed heart failure. Heart failure occurred most frequently in SU monotherapy exposure. After adjusting for duration of diabetes, the timing and order of treatments received, and known risk factors for heart failure, we found no differential effects among type-specific therapies. Patients with any drug use within the first year after diabetes diagnosis had a 4.75-fold higher risk (hazard ratio) for heart failure than those with drug-free status but had no increased risk during subsequent years. CONCLUSIONS -- The use of any pharmacological therapy for type 2 diabetes appears to be associated with an increased risk of heart failure. This risk does not persist beyond the first year after diagnosis of diabetes and does not appear to differ among the types of drug therapy examined. This observation suggests that the severity of diabetes or the preclinical duration of diabetes and the need for drug therapy, and not the therapy itself, is an explanation for heart failure in patients with type 2 diabetes. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
12. An international survey assessing the effects of the duration of attack-free period on health-related quality of life for patients with hereditary angioedema.
- Author
- Itzler, Robbin, Lumry, William R., Sears, John, Braverman, Julia, Li, Yinglei, Brennan, Caroline J., and Koch, Gary G.
- Subjects
- QUALITY of life, ANGIONEUROTIC edema, ANALYSIS of covariance, COMPLEMENT (Immunology), GASTROINTESTINAL system
- Abstract
Background: Hereditary angioedema (HAE) is characterized by unpredictable and often severe cutaneous and mucosal swelling that affects the extremities, face, larynx, gastrointestinal tract, or genitourinary area. Introduction of novel long-term prophylactic treatment options (lanadelumab, berotralstat, and C1-esterase inhibitor SC [human]) into the treatment armamentarium has substantially reduced HAE attacks, allowing patients to be attack free for longer with improvements to their quality of life. Using data drawn from a wide-ranging survey of patients with HAE, we examined the relationship between duration of time attack free and health-related quality of life (HRQoL), exploring the possibility that there is an association between observed improvement in HRQoL and attack-free duration. Methods: A survey among patients with HAE on long-term prophylaxis (LTP) in six countries (the US, Australia, Canada, UK, Germany, and Japan) assessed the relationship between attack-free duration and mean Angioedema Quality of Life (AE-QoL) scores, quality of life benefits, and rescue medication used. Analysis of covariance (ANCOVA) was used to assess the roles of LTP and attack-free period (< 1 month, 1– < 6 months, ≥ 6 months) on total AE-QoL scores. Results include descriptive p-values for strength of association, without control for multiplicity. Descriptive statistics were used to show the relationship between time attack free and quality of life benefits. Results: Longer durations of time for which participants reported being attack free at the time of the survey correlated with better AE-QoL scores and less use of rescue medication. The mean total AE-QoL scores were 51.8, 33.2, and 19.9 for those who reported having been attack free for < 1 month, 1– < 6 months, and ≥ 6 months, respectively, with higher scores reflecting more impairment. The ANCOVA results showed a strong association between attack-free duration and AE-QoL total score. 
Conclusion: This study shows that longer attack-free duration has an influential role for better HRQoL in patients receiving LTP. Prolonging the attack-free period is an important goal of therapy and recent advances in LTP have increased attack-free duration. However, opportunities exist for new treatments to further increase attack-free duration and improve HRQoL for all patients with HAE. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Cognitive behavior therapy for insomnia for untreated hypertension with comorbid insomnia disorder: The SLEEPRIGHT clinical trial.
- Author
- Sherwood, Andrew, Ulmer, Christi, Wu, Jade Q., Blumenthal, James A., Herold, Emma, Smith, Patrick J., Koch, Gary G., Johnson, Kristy, Viera, Anthony, Edinger, Jack, and Hinderliter, Alan
- Abstract
Insomnia and poor sleep are associated with an increased risk of developing cardiovascular disease (CVD) and its precursors, including hypertension. In 2022, the American Heart Association (AHA) added inadequate sleep to its list of health behaviors that increase the risk for CVD. It remains unknown, however, whether the successful treatment of insomnia and inadequate sleep can reduce heightened CVD risk. SLEEPRIGHT is a single‐site, prospective clinical trial designed to evaluate whether the successful treatment of insomnia results in improved markers of CVD risk in patients with untreated hypertension and comorbid insomnia disorder. Participants (N = 150) will undergo baseline assessments, followed by a 6‐week run‐in period after which they will receive cognitive behavior therapy for insomnia (CBT‐I), comprised of 6 hourly sessions with an experienced CBT‐I therapist over a 6‐week period. In addition to measures of insomnia severity, as well as both subjective and objective measures of sleep, the primary outcome measures are nighttime blood pressure (BP) and BP dipping assessed by 24‐h ambulatory BP monitoring (ABPM). Secondary outcomes include several CVD risk biomarkers, including clinic BP, lipid profile, vascular endothelial function, arterial stiffness, and sympathetic nervous system (SNS) activity. Data analysis will evaluate the association between improvements in insomnia and sleep with primary and secondary CVD risk biomarker outcomes. The SLEEPRIGHT trial (ClinicalTrials.Gov NCT04009447) will utilize CBT‐I, the current gold standard treatment for insomnia disorder, to evaluate whether reducing insomnia severity and improving sleep are accompanied by improved biomarkers of CVD risk in patients with untreated hypertension. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Strategies and issues for the analysis of ordered categorical...
- Author
- Koch, Gary G. and Tangen, Catherine
- Subjects
- ELECTRIC welding, STATISTICS
- Abstract
Proposes accumulation analysis (AA) as a method and assesses whether it provides reasonable results for the experiments considered. Topics include components of the general methods (Mantel-Haenszel tests, methods for fitting models to linear statistics, and methods for fitting models to distributions in multifactor studies); an arc-welding experiment; a contact-stain experiment; a post-etch window size experiment; and an overview of methods for ordered categorical data.
- Published
- 1990
- Full Text
- View/download PDF
15. The Equivalence of Parameter Estimates From Growth Curve Models and Seemingly Unrelated Regression Models.
- Author
- Stanek III, Edward J. and Koch, Gary G.
- Subjects
- PARAMETER estimation, REGRESSION analysis
- Abstract
Comments on the equivalence of parameter estimates from growth curve models and seemingly unrelated regression models. Topics include the need for special estimation techniques for multivariate models; the use of two-stage Aitken estimation in seemingly unrelated regression models; and implications for the equivalence of the multivariate models.
- Published
- 1985
- Full Text
- View/download PDF
16. Maintenance therapy with sucralfate in duodenal ulcer: Genuine prevention of accelerated healing of ulcer recurrence?
- Author
- Bynum, T. Edward and Koch, Gary G.
- Subjects
- SUCRALFATE, DUODENAL ulcers, PEPTIC ulcer, ULCER treatment, STOMACH ulcers, GASTROINTESTINAL diseases, PLACEBOS, CLINICAL trials, DRUG efficacy, THERAPEUTICS
- Abstract
We sought to compare the efficacy of sucralfate to placebo for the prevention of duodenal ulcer recurrence and to determine that the efficacy of sucralfate was due to a true reduction in ulcer prevalence and not due to secondary effects such as analgesic activity or accelerated healing. This was a double-blind, randomized, placebo-controlled, parallel groups, multicenter clinical study with 254 patients. All patients had a past history of at least two duodenal ulcers with at least one ulcer diagnosed by endoscopic examination 3 months or less before the start of the study. Complete ulcer healing without erosions was required to enter the study. Sucralfate or placebo were dosed as a 1-g tablet twice a day for 4 months, or until ulcer recurrence. Endoscopic examinations once a month and when symptoms developed determined the presence or absence of duodenal ulcers. If a patient developed an ulcer between monthly scheduled visits, the patient was dosed with a 1-g sucralfate tablet twice a day until the next scheduled visit. Statistical analyses of the results determined the efficacy of sucralfate compared with placebo for preventing duodenal ulcer recurrence. Comparisons of therapeutic agents for preventing duodenal ulcers have usually been made by testing for statistical differences in the cumulative rates for all ulcers developed during a follow-up period, regardless of the time of detection. Statistical experts at the United States Food and Drug Administration (FDA) and on the FDA Advisory Panel expressed doubts about clinical study results based on this type of analysis. They suggested three possible mechanisms for reducing the number of observed ulcers: (a) analgesic effects, (b) accelerated healing, and (c) true ulcer prevention. Traditional ulcer analysis could miss recurring ulcers due to an analgesic effect or accelerated healing. Point-prevalence analysis could miss recurring ulcers due to accelerated healing between endoscopic examinations. 
Maximum ulcer a... [ABSTRACT FROM PUBLISHER]
- Published
- 1991
- Full Text
- View/download PDF
17. Incidence of attachment loss over 3 years in older adults - new and progressing lesions.
- Author
- Beck, James D., Koch, Gary G., and Offenbacher, Steven
- Subjects
- TEETH injuries, DISEASE risk factors, DENTAL emergencies, PROGNOSIS, SYMPTOMS, DENTURE attachments, PERIODONTITIS, ADULTS
- Abstract
Researchers attempting to identify and quantify risk factors have not paid adequate attention to whether or not they actually are identifying risk factors and prognostic factors. The purpose of this paper is to present the incidence of attachment loss in people who have attachment loss in sites previously without disease and people who experience further progression of sites with disease, and to compare and contrast the characteristics of people with the two types of attachment loss. The subjects used for this study are a random sample of community-dwelling older adults residing in five contiguous North Carolina counties who were followed for 3 yr. The subjects were categorized into four groups according to the type of clinical attachment loss experienced: those who only had attachment loss in previously undiseased sites, those with progression of attachment loss in previously diseased sites, those who experienced both types of attachment loss, and those who had no new sites of attachment loss. A bivariate logistic model was developed to identify the characteristics associated with "new" disease onset as compared to "progression of disease". Just over 40% of the people had no change in their baseline attachment level. 27.5% of the people experienced only new lesions, 11.1% of the people only experienced clinical attachment loss in sites that had clinical attachment loss at baseline, and 20.1% experienced both kinds of clinical attachment loss. People with low income, those taking medications associated with soft tissue reactions, smokeless tobacco users, and those with a history of oral pain were at greater risk for new lesions. People at higher risk for disease progression were those with low income, those taking medications that may result in soft tissue reactions, and cigarette smokers. The model indicates that the characteristics are different enough that periodontitis may be like other diseases in which risk factors and prognostic factors are not the same. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
18. CATEGORICAL DATA ANALYSIS IN PUBLIC HEALTH.
- Author
- Preisser, John S. and Koch, Gary G.
- Subjects
- PUBLIC health research, LOGISTIC regression analysis, ESTIMATION theory, LEAST squares
- Abstract
A greater variety of categorical data methods are used today than 15 years ago. This article surveys categorical data methods widely applied in public health research. Whereas large sample chi-square methods, logistic regression analysis, and weighted least squares modeling of repeated measures once comprised the primary analytic tools for categorical data problems, today's methodology is comprised of a much broader range of tools made available by increasing computational efficiency. These include computational algorithms for exact inference of small samples and sparsely distributed data, conditional logistic regression for modeling highly stratified data, and generalized estimating equations for cluster samples. The latter, in particular, has found wide use in modeling the marginal probabilities of correlated counted, binary, and multinomial outcomes. The various methods are illustrated with examples including a study of the prevalence of cerebral palsy in very low birthweight infants and a study of cancer screening in primary care settings. [ABSTRACT FROM AUTHOR]
- Published
- 1997
- Full Text
- View/download PDF
19. Estimating Activity Limitation in the Noninstitutionalized Population: A Method for Small Areas.
- Author
- Lafata, Jennifer Elston, Koch, Gary G., and Weissert, William G.
- Subjects
- POPULATION, REGRESSION analysis, AGE, HUMAN sexuality, RACE, SOCIAL status
- Abstract
Although reliable direct state and local estimates of the activity-limited population are frequently unavailable, regression-adjusted synthetic estimates can be made. Such estimates use multivariate methods to model activity limitation at the national level and then apply model-predicted probabilities to corresponding community-specific demographic data. Methods. Using the 1989 National Health Interview Survey and the 1991 Area Resource File System, this study produced log-linear regression models that included person-level demographic and county-level contextual variables as predictors of activity limitation. Model-predicted rates were then multiplied by corresponding intercensal population data to generate state and local synthetic estimates of activity limitation. Results. Rates of activity limitation generally were found to increase with age and as the socioeconomic conditions of the county in which an individual resided worsened. Race and sex also tended to be statistically significant predictors of activity limitation. Conclusions. Activity limitation can be effectively modeled by age, sex, race, and community socioeconomic status. Synthetic estimates such as these are relatively simple to generate and can be useful for small-area planning in the absence of direct local estimates. [ABSTRACT FROM AUTHOR]
- Published
- 1994
- Full Text
- View/download PDF
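The synthetic-estimation step described in the abstract above (apply nationally modeled rates to local demographic counts) reduces to a weighted sum over demographic cells. A minimal sketch with made-up rates and cell counts; all names and numbers here are illustrative, not from the paper:

```python
def synthetic_estimate(model_rates, area_pop):
    """Regression-adjusted synthetic estimate for a small area: apply
    nationally modeled rates of activity limitation to the area's
    demographic cell counts and sum over cells."""
    return sum(model_rates[cell] * count for cell, count in area_pop.items())

# Hypothetical model-predicted rates and county population by (age, sex) cell.
rates = {("65-74", "F"): 0.20, ("65-74", "M"): 0.18,
         ("75+", "F"): 0.35, ("75+", "M"): 0.30}
county = {("65-74", "F"): 1000, ("65-74", "M"): 900,
          ("75+", "F"): 600, ("75+", "M"): 400}

# Expected number of activity-limited persons in this county.
expected_limited = synthetic_estimate(rates, county)
```

Dividing the result by the county's total population gives the synthetic prevalence rate for the area.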
20. Regression-Adjusted Small Area Estimates of Functional Dependency in the Noninstitutionalized American Population Age 65 and Over.
- Author
- Elston, Jennifer M., Koch, Gary G., and Weissert, William G.
- Subjects
- ELDER care, LONG-term health care, HEALTH planning, WELFARE dependency, CENSUS, HEALTH surveys, PUBLIC health, MULTIVARIATE analysis, POVERTY
- Abstract
Health planning efforts for the population age 65 and over have been hampered continually by the lack of reliable estimates of the noninstitutionalized long-term care population. Until recently national estimates were virtually nonexistent, and reliable small area estimates remain unavailable. However, with the recent publication of several national surveys and the 1990 Census, synthetic estimates can be made for states and counties by using multivariate methods to model functional dependency at the national level, and then applying the predicted probabilities to corresponding state and county data. Using the 1984 National Health Interview Survey's Supplement on Aging and the 1986 Area Health Resources File System, we have produced log-linear regression models that include demographic and contextual variables as predictors of functional dependency among the noninstitutionalized population age 65 and over. Age, sex, race, and the percent of the 65 and over population who reside in poverty were found to be significant predictors of functional dependency. Applying these models to 1986 Medicare Enrollment Statistics, regression-adjusted synthetic estimates of two levels of functional dependency were produced for all states and--as examples of how the rates can be used to produce additional synthetic estimates--the largest county in each state. We also produced point estimates and standard errors for the national prevalence of functional dependency among the noninstitutionalized population age 65 and over. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
21. An Alternative Approach to Multivariate Response Error Models for Sample Survey Data with Applications to Estimators Involving Subclass Means.
- Author
- Koch, Gary G.
- Subjects
- MATHEMATICAL functions, MULTIVARIATE analysis, ERRORS, MATHEMATICAL statistics, DECISION making, DEMOGRAPHIC surveys, ESTIMATION theory, REGRESSION analysis
- Abstract
Indicator functions are used as the basis for the formulation of a multivariate extension of the response error model developed by the U.S. Bureau of the Census. This approach allows a more complete characterization for general survey designs of the nature of the various components of the total variance of linear sample estimators and is particularly useful with respect to the role of the interaction component. These results are then applied to study the effect of response errors on subclass means, differences between subclass means, and post-stratified means. [ABSTRACT FROM AUTHOR]
- Published
- 1973
- Full Text
- View/download PDF
22. A Linear Models Approach to the Analysis of Survival and Extent of Disease in Multidimensional Contingency Tables.
- Author
- Koch, Gary G., Johnson, William D., and Tolley, H. Dennis
- Subjects
- CONTINGENCY tables, SURVIVAL analysis (Biometry), CHRONICALLY ill, LEAST squares, STATISTICAL correlation, DISTRIBUTION (Probability theory), LINEAR statistical models, ESTIMATION theory
- Abstract
Matrix algorithms are presented which generate estimates of t-year survival rates for patients with chronic disease from a categorical data approach. Weighted least squares has been applied to the resulting estimates to fit linear models which, together with large sample theory, provide a straightforward and unified method for testing hypotheses of interest. The sequential use of cross-product and hierarchical structures in a stepwise manner is described in detail as a useful descriptive approach to formulating efficient models. These results can be applied as a basis for "clustering" combinations of clinical findings into groups indicative of stage of disease. [ABSTRACT FROM AUTHOR]
- Published
- 1972
- Full Text
- View/download PDF
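The weighted-least-squares fitting referenced in the abstract above can be illustrated in its simplest form: fitting a common proportion across groups, with each group proportion weighted by the inverse of its estimated variance, plus a Wald-type statistic for lack of fit. The function name and the single-column design are illustrative simplifications of the general estimator b = (X'V^-1 X)^-1 X'V^-1 p, not the paper's matrix algorithms.

```python
def wls_common_proportion(props, ns):
    """Weighted least squares for the simplest categorical linear model:
    X is a column of ones, i.e. a common proportion across groups.

    Each group proportion is weighted by the inverse of its estimated
    variance p(1-p)/n; the returned Wald-type statistic Q tests lack of
    fit on (number of groups - 1) degrees of freedom."""
    weights = [n / (p * (1.0 - p)) for p, n in zip(props, ns)]
    beta = sum(w * p for w, p in zip(weights, props)) / sum(weights)
    q = sum(w * (p - beta) ** 2 for w, p in zip(weights, props))
    return beta, q

# Two hypothetical groups with very different proportions: the common
# fit sits at 0.5 and Q is large, flagging lack of fit.
beta, q = wls_common_proportion([0.2, 0.8], [100, 100])
```

Richer designs replace the column of ones with a model matrix of group covariates; the same inverse-variance weighting carries through.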
23. THE EFFECT OF NON-SAMPLING ERRORS ON MEASURES OF ASSOCIATION IN 2X2 CONTINGENCY TABLES.
- Author
- Koch, Gary G.
- Subjects
- ERROR analysis in mathematics, STATISTICAL sampling, STATISTICS, ANALYSIS of variance, CONTINGENCY tables, ERRORS, DISTRIBUTION (Probability theory), APPROXIMATION theory, PROBABILITY theory
- Abstract
The effects of non-sampling errors on measures of association in 2 x 2 contingency tables are evaluated by the application of models due to the U. S. Bureau of the Census. This is achieved by first expressing the appropriate sample estimates in the form of Taylor series approximations involving cell probabilities, and then applying the model in a term by term fashion. In this way, the relative effects of sampling errors and response errors on the variability of an estimated measure of association may be interpreted in terms of a sampling variance component and a response variance component. Finally, some indication is given as to how re-survey information can be used to estimate these quantities. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
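The Taylor-series device described in the abstract above is the familiar delta method. For the log odds ratio of a 2x2 table it yields the textbook variance approximation 1/a + 1/b + 1/c + 1/d; the sketch below shows that standard result, not the paper's decomposition into sampling- and response-variance components.

```python
import math

def log_odds_ratio_with_se(a, b, c, d):
    """Log odds ratio of a 2x2 table with cell counts  a b / c d ,
    with the first-order Taylor (delta-method) standard error
    sqrt(1/a + 1/b + 1/c + 1/d)."""
    lor = math.log((a * d) / (b * c))
    se = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    return lor, se

# A perfectly balanced table has log odds ratio 0.
lor, se = log_odds_ratio_with_se(10, 10, 10, 10)
```

The same expansion, applied term by term to a response-error model, is what lets the variance of the estimated association be split into sampling and response components.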
24. SOME ASPECTS OF THE STATISTICAL ANALYSIS OF 'SPLIT PLOT' EXPERIMENTS IN COMPLETELY RANDOMIZED LAYOUTS.
- Author
- Koch, Gary G.
- Subjects
- MULTIVARIATE analysis, STATISTICS, PARAMETER estimation, HYPOTHESIS, NONPARAMETRIC statistics, MATRICES (Mathematics), VECTOR analysis
- Abstract
The statistical analysis of completely randomized "split-plot" experiments is discussed from the point of view of the underlying multivariate model. In doing this, first certain well-known aspects of the parametric case are reviewed; however, attention is primarily directed toward the development of appropriate non-parametric procedures. The general structure of "split-plot" experiments involves N randomly chosen subjects to whom treatments have been assigned according to a completely randomized design and from each of whom is obtained an observation vector, the components of which represent the responses of the subject to each one of several conditions. Hence, the data matrix has the appearance of a set of mixed models, each one of which corresponds to a particular treatment. The different conditions correspond to the "split plot" treatments in agricultural experiments while the different treatments correspond to the "whole plot" treatments. Alternatively, such designs may be interpreted simply as multivariate one-way layouts in which the components of the observation vector have been measured in the same units (or on the same scale) and hence are comparable. In such experiments, a number of hypotheses are of interest--the hypothesis of no treatment effects, the hypothesis of no condition effects, the hypothesis of no interaction between treatments and conditions. Various formulations of these hypotheses are discussed under several different combinations of assumptions concerning the joint distribution of the components of the observation vector. In each case considered, appropriate parametric or nonparametric test... [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
25. Risk of Venous Thromboembolism With Tofacitinib Versus Tumor Necrosis Factor Inhibitors in Cardiovascular Risk‐Enriched Rheumatoid Arthritis Patients.
- Author
-
Charles‐Schoeman, Christina, Fleischmann, Roy, Mysler, Eduardo, Greenwald, Maria, Ytterberg, Steven R., Koch, Gary G., Bhatt, Deepak L., Wang, Cunshan, Mikuls, Ted R., Chen, All‐shine, Connell, Carol A., Woolcott, John C., Menon, Sujatha, Chen, Yan, Lee, Kristen, and Szekanecz, Zoltán
- Abstract
Objective: The ORAL Surveillance trial found a dose‐dependent increase in venous thromboembolism (VTE) and pulmonary embolism (PE) events with tofacitinib versus tumor necrosis factor inhibitors (TNFi). We aimed to assess VTE incidence over time and explore risk factors of VTE, including disease activity, in ORAL Surveillance. Methods: Patients with rheumatoid arthritis (RA) aged 50 years or older with at least one additional cardiovascular risk factor received tofacitinib 5 or 10 mg twice daily (BID) or TNFi. Post hoc, cumulative probabilities and incidence rates (patients with first events/100 patient‐years) by 6‐month intervals were estimated for adjudicated VTE, deep vein thrombosis, and PE. Cox regression models identified risk factors. Clinical Disease Activity Index leading up to the event was explored in patients with VTE. Results: Cumulative probabilities for VTE and PE were higher with tofacitinib 10 mg BID, but not 5 mg BID, versus TNFi. Incidence rates were consistent across 6‐month intervals within treatments. Across treatments, risk factors for VTE included prior VTE, body mass index greater than or equal to 35 kg/m2, older age, and history of chronic lung disease. At the time of the event, most patients with VTE had active disease as defined by Clinical Disease Activity Index. Conclusion: Incidences of VTE and PE were higher with tofacitinib (10 > 5 mg BID) versus TNFi and were generally consistent over time. Across treatments, VTE risk factors were aligned with previous studies in the general RA population. These data highlight the importance of assessing VTE risk factors, including age, body mass index, and VTE history, when considering initiation of tofacitinib or TNFi in patients with active RA. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. ASA celebrates sesquicentennial.
- Author
-
Koch, Gary G. and Leone, Fred C.
- Subjects
- *
ANNIVERSARIES - Abstract
Reports on the celebration of the 150th anniversary of the American Statistical Association (ASA) in Boston, Massachusetts in August 1989. Attendance figures; Speech by 1989 ASA president Janet L. Norwood; Congratulatory letter from President George Bush; Exhibit of archival documents for ASA and government agencies; Historical poster session; Distribution of sesquicentennial souvenirs.
- Published
- 1990
- Full Text
- View/download PDF
27. Comments on "Sample size formula for a win ratio endpoint" by R. X. Yu and J. Ganju.
- Author
-
Gasparyan, Samvel B., Kowalewski, Elaine K., and Koch, Gary G.
- Subjects
- *
EXPERIMENTAL design , *SAMPLE size (Statistics) - Abstract
The coefficient HT ht is close to 1 in case of HT ht being large and HT ht can still be used as a conservative value to avoid the risk of under-powered trials. Therefore, the main difference between the two Formulas (3) and (4) is how the alternative hypothesis is formulated (in terms of a HT ht or HT ht ). In the case of no ties HT ht and both formulas give similar results. [Extracted from the article]
- Published
- 2022
- Full Text
- View/download PDF
28. Effects of dapagliflozin on prevention of major clinical events and recovery in patients with respiratory failure because of COVID‐19: Design and rationale for the DARE‐19 study.
- Author
-
Kosiborod, Mikhail, Berwanger, Otavio, Koch, Gary G., Martinez, Felipe, Mukhtar, Omar, Verma, Subodh, Chopra, Vijay, Javaheri, Ali, Ambery, Philip, Gasparyan, Samvel B., Buenconsejo, Joan, Sjöström, C. David, Langkilde, Anna Maria, Oscarsson, Jan, and Esterline, Russell
- Subjects
- *
COVID-19 , *SODIUM-glucose cotransporter 2 inhibitors , *RESPIRATORY insufficiency , *DAPAGLIFLOZIN , *CHRONIC kidney failure - Abstract
Aims: Coronavirus disease 2019 (COVID‐19) is caused by a novel severe acute respiratory syndrome coronavirus 2. It can lead to multiorgan failure, including respiratory and cardiovascular decompensation, and kidney injury, with significant associated morbidity and mortality, particularly in patients with underlying metabolic, cardiovascular, respiratory or kidney disease. Dapagliflozin, a sodium‐glucose cotransporter‐2 inhibitor, has shown significant cardio‐ and renoprotective benefits in patients with type 2 diabetes (with and without atherosclerotic cardiovascular disease), heart failure and chronic kidney disease, and may provide similar organ protection in high‐risk patients with COVID‐19. Materials and methods: DARE‐19 (NCT04350593) is an investigator‐initiated, collaborative, international, multicentre, randomized, double‐blind, placebo‐controlled study testing the dual hypotheses that dapagliflozin can reduce the incidence of cardiovascular, kidney and/or respiratory complications or all‐cause mortality, or improve clinical recovery, in adult patients hospitalized with COVID‐19 but not critically ill on admission. Eligible patients will have ≥1 cardiometabolic risk factor for COVID‐19 complications. Patients will be randomized 1:1 to dapagliflozin 10 mg or placebo. Primary efficacy endpoints are time to development of new or worsened organ dysfunction during index hospitalization, or all‐cause mortality, and the hierarchical composite endpoint of change in clinical status through day 30 of treatment. Safety of dapagliflozin in individuals with COVID‐19 will be assessed. Conclusions: DARE‐19 will evaluate whether dapagliflozin can prevent COVID‐19‐related complications and all‐cause mortality, or improve clinical recovery, and assess the safety profile of dapagliflozin in this patient population. Currently, DARE‐19 is the first large randomized controlled trial investigating use of sodium‐glucose cotransporter 2 inhibitors in patients with COVID‐19. 
[ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
29. An Application of Multivariate Analysis to Complex Sample Survey Data.
- Author
-
Koch, Gary G. and Lemeshow, Stanley
- Subjects
- *
MULTIVARIATE analysis , *STATISTICAL sampling , *SURVEYS , *ANALYSIS of variance , *INDUSTRIAL applications , *REGRESSION analysis , *ESTIMATES , *RANDOM variables - Abstract
This article adapts a standard method of multivariate analysis to a highly complex sampling design utilizing the method of balanced repeated replication for calculating valid and consistent estimates of variance. The example illustrates that by doing univariate tests to compare the mean height (or weight) of six year old white males to the mean height (or weight) of six year old Negro males, no significant differences are found between the two groups. However, the multivariate approach yields a significant result because the directions of the differences between two groups with respect to two positively correlated variables are reversed. [ABSTRACT FROM AUTHOR]
- Published
- 1972
- Full Text
- View/download PDF
30. Methods for clarifying criteria for study continuation at interim analysis.
- Author
-
Wiener, Laura E., Ivanova, Anastasia, and Koch, Gary G.
- Subjects
- *
FALSE positive error , *CLINICAL trials monitoring , *PROOF of concept - Abstract
SUMMARY: In monitoring clinical trials, the question of futility, or whether the data thus far suggest that the results at the final analysis are unlikely to be statistically successful, is regularly of interest over the course of a study. However, the opposite viewpoint of whether the study is sufficiently demonstrating proof of concept (POC) and should continue is a valuable consideration and ultimately should be addressed with high POC power so that a promising study is not prematurely terminated. Conditional power is often used to assess futility, and this article interconnects the ideas of assessing POC for the purpose of study continuation with conditional power, while highlighting the importance of the POC type I error and the POC type II error for study continuation or not at the interim analysis. Methods for analyzing subgroups motivate the interim analyses to maintain high POC power via an adjusted interim POC significance level criterion for study continuation or testing against an inferiority margin. Furthermore, two versions of conditional power based on the assumed effect size or the observed interim effect size are considered. Graphical displays illustrate the relationship of the POC type II error for premature study termination to the POC type I error for study continuation and the associated conditional power criteria. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
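The interim-monitoring quantities in this abstract can be made concrete with the standard conditional-power formula for a one-sided z-test via the B-value decomposition. This is a minimal sketch of that conventional calculation, not the POC-specific significance-level criteria developed in the paper; the function name and arguments are illustrative.

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def conditional_power(z_interim, info_frac, z_alpha, drift):
    """Conditional power for a one-sided z-test.

    z_interim -- interim z-statistic
    info_frac -- information fraction t = interim N / final N
    z_alpha   -- final critical value (e.g. 1.96 for one-sided 0.025)
    drift     -- expected final z under the assumed effect size; use
                 z_interim / sqrt(info_frac) for the observed-trend version
    """
    b_value = z_interim * sqrt(info_frac)          # B(t) = Z(t) * sqrt(t)
    remaining_mean = drift * (1.0 - info_frac)     # mean of B(1) - B(t)
    remaining_sd = sqrt(1.0 - info_frac)           # sd of B(1) - B(t)
    return phi((b_value + remaining_mean - z_alpha) / remaining_sd)
```

For example, halfway through a trial designed with drift 2.8 (roughly 80% power), an interim z of 1.2 gives conditional power of about 0.66 under the design effect, illustrating the two versions of conditional power the abstract contrasts.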
31. Methods for Missing Data Handling in Phase III Clinical Trials with Nonnormal Endpoints and Nonnormal Covariates.
- Author
-
Fan, Chunpeng, Wei, Lynn, and Koch, Gary G.
- Subjects
- *
MISSING data (Statistics) , *MULTIPLE imputation (Statistics) , *FALSE positive error , *CLINICAL trials , *ANALYSIS of covariance , *ERROR rates - Abstract
In randomized clinical trials, when the endpoint is the change from baseline at the last scheduled visit, various parametric, semiparametric, and nonparametric methods have been developed to handle the possible missing data due to dropouts. Although the last observation carried forward (LOCF) followed by an analysis of covariance (ANCOVA) model or a rank ANCOVA model and the mixed-effects model for repeated measures (MMRM) have been extensively compared and widely used even with presence of the covariates, they may lead to biased results when the required distributional or missing technique assumptions are not satisfied. Nonparametric missing data handling methods including the mean rank imputation (MRI) method relax the underlying distributional assumption; however, when covariates are present, conditions for it to be valid have been investigated to a very limited extent. This article rigorously derives asymptotic properties of the mean rank imputation method with the presence of a nonnormal covariate. The investigated methods are applied to an illustrative phase III clinical trial. Simulation studies confirmed the better performance of the mean rank imputation method in terms of Type I error rate control and power under certain mild conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
32. Statistical planning in confirmatory clinical trials with multiple treatment groups, multiple visits, and multiple endpoints.
- Author
-
Sun, Hengrui, Snyder, Ellen, and Koch, Gary G.
- Subjects
- *
DRUG development , *CLINICAL drug trials , *DRUG efficacy , *MEDICATION safety , *CLINICAL trials - Abstract
Multiplicity issues can be multidimensional: A confirmatory clinical trial may be designed to have efficacy assessed with two or more primary endpoints, for multiple dose groups, and at several post-baseline visits. Controlling for multiplicity in this situation is challenging because there can be a hierarchy with respect to some but not all measurements. If the higher dose is considered more efficacious, a multiplicity approach may evaluate the higher dose with higher priority through a fixed sequential testing framework for dose assessments in combination with a Hochberg approach for endpoints. The lower dose is only assessed when the higher dose has significant results, which reduces the power for detecting signals in the lower dose group. However, in some instances the higher dose may be associated with tolerability or safety concerns that preclude regulatory approval. A real confirmatory clinical trial with such challenges is provided as an illustrative example. We discuss closed testing procedures based on multi-way averages of comparisons for this complex multiplicity situation through illustrative case analyses and a simulation study. Such strategies manage the higher dose and the lower dose with equal priority, and they enable evaluation of the multiple endpoints at multiple visits collectively with power being reasonably high. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
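The abstract mentions a Hochberg approach for the endpoint dimension. As background, this is a minimal sketch of the standard Hochberg step-up procedure only, not the paper's full fixed-sequence/closed-testing strategy:

```python
def hochberg(p_values, alpha=0.05):
    """Hochberg step-up procedure.

    Returns a list of booleans (True = reject) in the original
    order of the hypotheses.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])   # ascending p
    reject = [False] * m
    # Find the largest k (1-based) with p_(k) <= alpha / (m - k + 1),
    # then reject that hypothesis and all with smaller p-values.
    for k in range(m, 0, -1):
        if p_values[order[k - 1]] <= alpha / (m - k + 1):
            for i in order[:k]:
                reject[i] = True
            break
    return reject
```

With p-values (0.03, 0.04) and alpha 0.05, Hochberg rejects both hypotheses because the larger p-value clears the unadjusted level, whereas the step-down Holm procedure would reject neither; this gain is why Hochberg is attractive for correlated endpoints.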
33. A model-based conditional power assessment for decision making in randomized controlled trial studies.
- Author
-
Zou, Baiming, Cai, Jianwen, Koch, Gary G., Zhou, Haibo, and Zou, Fei
- Subjects
- *
CHAOS theory , *CLINICAL trials , *COMPUTER simulation , *TYPE 1 diabetes , *MULTIVARIATE analysis , *PROBABILITY theory , *REGRESSION analysis , *RESEARCH funding , *STATISTICS , *SYSTEM analysis , *SAMPLE size (Statistics) , *STATISTICAL models - Abstract
Conditional power based on summary statistic by comparing outcomes (such as the sample mean) directly between 2 groups is a convenient tool for decision making in randomized controlled trial studies. In this paper, we extend the traditional summary statistic-based conditional power with a general model-based assessment strategy, where the test statistic is based on a regression model. Asymptotic relationships between parameter estimates based on the observed interim data and final unobserved data are established, from which we develop an analytic model-based conditional power assessment for both Gaussian and non-Gaussian data. The model-based strategy is not only flexible in handling baseline covariates and more powerful in detecting the treatment effects compared with the conventional method but also more robust in controlling the overall type I error under certain missing data mechanisms. The performance of the proposed method is evaluated by extensive simulation studies and illustrated with an application to a clinical study. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
34. Considerations on Testing Secondary Endpoints in Group Sequential Design.
- Author
-
Li, Xiaoming, Wulfsohn, Michael S., and Koch, Gary G.
- Subjects
- *
ONCOLOGY , *CLINICAL trials , *FALSE positive error - Abstract
In the fixed sample size design, when the null hypothesis for the primary endpoint is rejected at trial completion, secondary endpoints sometimes are tested at the full alpha level in a prespecified hierarchical order, and this strategy strongly controls the overall Type I error rate. However, in a group sequential setting, this hierarchical testing strategy may not control the overall Type I error rate for the secondary endpoints in the strong sense. Thus, when there is one interim analysis, there are proposals for an alpha spending function for testing secondary endpoints. Motivated by a Phase 3 oncology trial, we explored whether there is a less stringent approach to control the Type I error rate for testing one or more secondary endpoints in a prespecified order for the settings where there are more than one interim analysis. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
35. A Basic Demonstration of the [-1,1] Range for the Correlation Coefficient.
- Author
-
Koch, Gary G.
- Subjects
- *
STATISTICAL correlation , *REGRESSION analysis - Abstract
Demonstrates the correlation coefficient that lies in the range [-1,1]. Features of the correlation coefficient; Role of the correlation coefficient; Implications of linear regression on role of the correlation coefficient.
- Published
- 1985
- Full Text
- View/download PDF
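One classical route to the [-1,1] bound (the note's own demonstration proceeds via linear regression and may differ in detail) uses the nonnegativity of the variance of standardized sums:

```latex
0 \le \operatorname{Var}\!\left(\frac{X}{\sigma_X} \pm \frac{Y}{\sigma_Y}\right)
  = \frac{\operatorname{Var}(X)}{\sigma_X^2}
  + \frac{\operatorname{Var}(Y)}{\sigma_Y^2}
  \pm \frac{2\,\operatorname{Cov}(X,Y)}{\sigma_X \sigma_Y}
  = 2(1 \pm \rho),
```

so $2(1+\rho) \ge 0$ and $2(1-\rho) \ge 0$ together give $-1 \le \rho \le 1$, with equality exactly when one standardized variable is an affine function of the other.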
36. A USEFUL LEMMA FOR PROVING THE EQUALITY OF TWO MATRICES WITH APPLICATIONS TO LEAST SQUARES TYPE QUADRATIC FORMS.
- Author
-
Koch, Gary G.
- Subjects
- *
MATRICES (Mathematics) , *QUADRATIC forms , *EQUALITY , *LEAST squares , *MATHEMATICAL statistics , *ESTIMATION theory - Abstract
A useful lemma for proving the equality of two matrices is given. The result is then applied to proving the equality of certain quadratic forms arising in least squares models. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
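For context, a classical lemma of this kind (stated here for orientation; not necessarily the paper's exact result) connects quadratic forms to matrix equality: if $A$ and $B$ are symmetric and $x^{\mathsf{T}} A x = x^{\mathsf{T}} B x$ for all $x$, then $A = B$, via the polarization identity:

```latex
x^{\mathsf{T}} A y
  = \tfrac{1}{4}\left[(x+y)^{\mathsf{T}} A (x+y) - (x-y)^{\mathsf{T}} A (x-y)\right],
```

so equality of the two quadratic forms for all arguments forces $x^{\mathsf{T}} A y = x^{\mathsf{T}} B y$ for all $x, y$, and hence $A = B$; this is the standard bridge between least squares quadratic forms and their defining matrices.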
37. Statistical planning to address strongly correlated endpoints for inferential subgroups: An innovative approach for an illustrative clinical trial with complex multiplicity issues.
- Author
-
Sun, Hengrui, Binkowitz, Bruce, and Koch, Gary G
- Subjects
- *
CLINICAL trials , *INFERENTIAL statistics , *SPAN (Electronic computer system) , *CARDIOVASCULAR disease treatment , *SIMULATION methods & models - Abstract
Multiplicity is an important statistical issue that arises in clinical trials when the efficacy of the test treatment is evaluated in multiple ways. The major concern for multiplicity is that uncontrolled multiple assessments lead to inflated family-wise Type I error, and they thereby undermine the integrity of the statistical inferences. Multiplicity comes from different sources, for example, making inferences either on the overall population or some pre-specified sub-populations, while multiple endpoints need to be evaluated for each population. Therefore, a sound statistical strategy that controls the family-wise Type I error rate in a strong sense, without excessive loss of power from over-control, is crucial for the success of the trial. For a recent phase III cardiovascular trial with such complex multiplicity, we illustrate the use of a closed testing strategy that begins with a global test; and subsequent testing only proceeds when the global test is rejected. Also, we discuss a simulation study based on this trial to compare the power of the illustrated closed testing strategy to some well-known alternative approaches. We found that this strategy can comprehensively meet most of the primary objectives of the trial effectively with reasonably high overall power. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
38. Power and sample size calculation for the win odds test: application to an ordinal endpoint in COVID-19 trials.
- Author
-
Gasparyan, Samvel B., Kowalewski, Elaine K., Folkvaljon, Folke, Bengtsson, Olof, Buenconsejo, Joan, Adler, John, and Koch, Gary G.
- Subjects
- *
SAMPLE size (Statistics) , *COVID-19 , *STATISTICAL software , *RANDOM variables , *INDEPENDENT variables - Abstract
The win odds is a distribution-free method of comparing locations of distributions of two independent random variables. Introduced as a method for analyzing hierarchical composite endpoints, it is well suited to be used in the analysis of ordinal scale endpoints in COVID-19 clinical trials. For a single outcome, we provide power and sample size calculation formulas for the win odds test. We also provide an implementation of the win odds analysis method for a single ordinal outcome in a commonly used statistical software to make the win odds analysis fully reproducible. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
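The paper derives formulas for the win odds test specifically; as a rough cross-check, a Noether-type approximation for the closely related win probability p = P(X > Y) (1:1 allocation, no ties) can be sketched in a few lines. The function names are illustrative, and this approximation is not the formula from the paper.

```python
from math import ceil
from statistics import NormalDist

def win_prob_from_win_odds(win_odds):
    """Convert win odds WO = p / (1 - p) back to win probability p."""
    return win_odds / (1.0 + win_odds)

def total_sample_size(win_prob, alpha=0.05, power=0.80):
    """Noether-type total sample size (both arms combined, 1:1
    allocation, no ties) to detect win probability p = P(X > Y)
    against the null value p = 1/2 with a two-sided test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil((z_a + z_b) ** 2 / (3 * (win_prob - 0.5) ** 2))
```

For instance, a win odds of 1.5 corresponds to win probability 0.6, for which this approximation gives a total of roughly 260 patients at 80% power and two-sided alpha 0.05.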
39. Statistical Perspectives on Subgroup Analysis: Testing for Heterogeneity and Evaluating Error Rate for the Complementary Subgroup.
- Author
-
Alosh, Mohamed, Huque, Mohammad F., and Koch, Gary G.
- Subjects
- *
ERROR rates , *HETEROGENEITY , *POPULATION , *CLINICAL trials , *DEMOGRAPHIC surveys , *SUBGROUP analysis (Experimental design) - Abstract
Substantial heterogeneity in treatment effects across subgroups can cause significant findings in the overall population to be driven predominantly by those of a certain subgroup, thus raising concern on whether the treatment should be prescribed for the least benefitted subgroup. Because of its low power, a nonsignificant interaction test can lead to incorrectly prescribing treatment for the overall population. This article investigates the power of the interaction test and its implications. Also, it investigates the probability of prescribing the treatment to a nonbenefitted subgroup on the basis of a nonsignificant interaction test and other recently proposed criteria. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
40. Comparison Between Two Controlled Multiple Imputation Methods for Sensitivity Analyses of Time-to-Event Data With Possibly Informative Censoring.
- Author
-
Lu, Kaifeng, Li, Dayong, and Koch, Gary G.
- Subjects
- *
MULTIPLE imputation (Statistics) , *PLACEBOS , *CLINICAL trials - Abstract
Controlled imputation methods provide general and flexible sensitivity analyses to address nonignorable missing data. For time-to-event data with possibly informative censoring, we compare two popular methods for imputing the censored event time conditional on the time of follow-up discontinuation. One is the delta-adjusted method that specifies that the hazard of having an event for subjects who discontinued before the time point is multiplicatively increased relative to the hazard for subjects who continued beyond the time point. The other is the reference-based method that specifies that the hazard for experimental subjects who discontinued lies between the hazard for experimental subjects who continued and the hazard for the reference control (e.g., placebo) subjects. We consider both piecewise constant and nonparametric baseline hazard functions, Bayesian and frequentist imputations, and Rubin’s and bootstrap variances for the multiple imputation estimator. We show that both the reference-based and delta-adjusted sensitivity analyses control the one-sided Type I error rate (in the direction of a difference favoring the experimental treatment). In addition, when the bootstrap variance is used for inference, the reference-based sensitivity analysis has better power than the delta-adjusted sensitivity analysis for the same underlying treatment effect. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
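The delta-adjusted idea can be illustrated under a toy constant-hazard working model: for a subject censored at time c, the residual event time is drawn from an exponential distribution whose hazard is inflated by a factor delta, so larger delta imputes earlier events for dropouts. This is a sketch under that stated assumption, not the paper's piecewise-constant or Bayesian machinery; names are illustrative.

```python
import random
from math import log

def impute_censored_exponential(censor_time, base_hazard, delta, rng=random):
    """Impute an event time for a subject censored at censor_time,
    assuming a constant baseline hazard and a delta-inflated hazard
    after discontinuation (delta >= 1 penalizes early dropout).

    By memorylessness, the residual life beyond censor_time is
    exponential with rate delta * base_hazard.
    """
    u = rng.random()
    return censor_time + (-log(1.0 - u)) / (delta * base_hazard)
```

Repeating the imputation across bootstrap samples or multiple-imputation draws, and varying delta over a grid, then traces out the sensitivity analysis the abstract describes.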
41. On model selections for repeated measurement data in clinical studies.
- Author
-
Zou, Baiming, Jin, Bo, Koch, Gary G., Zhou, Haibo, Borst, Stephen E., Menon, Sandeep, and Shuster, Jonathan J.
- Abstract
Repeated measurement designs have been widely used in various randomized controlled trials for evaluating long-term intervention efficacies. For some clinical trials, the primary research question is how to compare two treatments at a fixed time, using a t-test. Although simple, robust, and convenient, this type of analysis fails to utilize a large amount of collected information. Alternatively, the mixed-effects model is commonly used for repeated measurement data. It models all available data jointly and allows explicit assessment of the overall treatment effects across the entire time spectrum. In this paper, we propose an analytic strategy for longitudinal clinical trial data where the mixed-effects model is coupled with a model selection scheme. The proposed test statistics not only make full use of all available data but also utilize the information from the optimal model deemed for the data. The performance of the proposed method under various setups, including different data missing mechanisms, is evaluated via extensive Monte Carlo simulations. Our numerical results demonstrate that the proposed analytic procedure is more powerful than the t-test when the primary interest is to test for the treatment effect at the last time point. Simulations also reveal that the proposed method outperforms the usual mixed-effects model for testing the overall treatment effects across time. In addition, the proposed framework is more robust and flexible in dealing with missing data compared with several competing methods. The utility of the proposed method is demonstrated by analyzing a clinical trial on the cognitive effect of testosterone in geriatric men with low baseline testosterone levels. Copyright © 2015 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
42. Cardiovascular and Cancer Risk with Tofacitinib in Rheumatoid Arthritis.
- Author
-
Ytterberg, Steven R., Bhatt, Deepak L., Mikuls, Ted R., Koch, Gary G., Fleischmann, Roy, Rivas, Jose L., Germino, Rebecca, Menon, Sujatha, Sun, Yanhui, Wang, Cunshan, Shapiro, Andrea B., Kanik, Keith S., and Connell, Carol A.
- Abstract
BACKGROUND Increases in lipid levels and cancers with tofacitinib prompted a trial of major adverse cardiovascular events (MACE) and cancers in patients with rheumatoid arthritis receiving tofacitinib as compared with a tumor necrosis factor (TNF) inhibitor. METHODS We conducted a randomized, open-label, noninferiority, postauthorization, safety end-point trial involving patients with active rheumatoid arthritis despite methotrexate treatment who were 50 years of age or older and had at least one additional cardiovascular risk factor. Patients were randomly assigned in a 1:1:1 ratio to receive tofacitinib at a dose of 5 mg or 10 mg twice daily or a TNF inhibitor. The coprimary end points were adjudicated MACE and cancers, excluding nonmelanoma skin cancer. The noninferiority of tofacitinib would be shown if the upper boundary of the two-sided 95% confidence interval for the hazard ratio was less than 1.8 for the combined tofacitinib doses as compared with a TNF inhibitor. RESULTS A total of 1455 patients received tofacitinib at a dose of 5 mg twice daily, 1456 received tofacitinib at a dose of 10 mg twice daily, and 1451 received a TNF inhibitor. During a median follow-up of 4.0 years, the incidences of MACE and cancer were higher with the combined tofacitinib doses (3.4% [98 patients] and 4.2% [122 patients], respectively) than with a TNF inhibitor (2.5% [37 patients] and 2.9% [42 patients]). The hazard ratios were 1.33 (95% confidence interval [CI], 0.91 to 1.94) for MACE and 1.48 (95% CI, 1.04 to 2.09) for cancers; the noninferiority of tofacitinib was not shown. The incidences of adjudicated opportunistic infections (including herpes zoster and tuberculosis), all herpes zoster (nonserious and serious), and adjudicated nonmelanoma skin cancer were higher with tofacitinib than with a TNF inhibitor. Efficacy was similar in all three groups, with improvements from month 2 that were sustained through trial completion.
CONCLUSIONS In this trial comparing the combined tofacitinib doses with a TNF inhibitor in a cardiovascular risk-enriched population, risks of MACE and cancers were higher with tofacitinib and did not meet noninferiority criteria. Several adverse events were more common with tofacitinib. (Funded by Pfizer; ORAL Surveillance ClinicalTrials.gov number, NCT02092467.) [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
43. A robust method for comparing two treatments in a confirmatory clinical trial via multivariate time-to-event methods that jointly incorporate information from longitudinal and time-to-event data.
- Author
-
Saville, Benjamin R., Herring, Amy H., and Koch, Gary G.
- Abstract
We consider regulatory clinical trials that require a prespecified method for the comparison of two treatments for chronic diseases (e.g. Chronic Obstructive Pulmonary Disease) in which patients suffer deterioration in a longitudinal process until death occurs. We define a composite endpoint structure that encompasses both the longitudinal data for deterioration and the time-to-event data for death, and use multivariate time-to-event methods to assess treatment differences on both data structures simultaneously, without a need for parametric assumptions or modeling. Our method is straightforward to implement, and simulations show that the method has robust power in situations in which incomplete data could lead to lower than expected power for either the longitudinal or survival data. We illustrate the method on data from a study of chronic lung disease. Copyright © 2009 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
44. Randomization-based nonparametric methods for the analysis of multicentre trials.
- Author
-
LaVange, Lisa M., Durham, Todd A., and Koch, Gary G.
- Subjects
- *
CLINICAL trials , *MEDICAL research , *REGRESSION analysis , *CLINICAL medicine , *DRUG development , *DRUG metabolism , *EYE diseases - Abstract
Multicentre trials offer several advantages over single centre trials in clinical research, including the ability to recruit patients at a faster rate over the course of the study, increased generalizability through the use of a broader patient population, and the ability to shed light on the replication of findings at multiple centres in a single study. A nonparametric approach to the analysis of multicentre trial data provides a convenient way for addressing the role of centres as well as baseline covariables during data analysis. With the use of randomization-based nonparametric methods, the strategy for evaluating the null hypothesis of no treatment effect can be prespecified during study planning without requiring a specific structure for the relationship of response criteria (or endpoints) to centres, covariables, or potential interaction terms. Further, the basis of inference for the application of these methods is the randomization mechanism, and the population to which inference can be directly made is the study population itself. No assumptions about underlying distributions, data structures, likelihood functions, or samples from super populations of inference are required. A three-step approach is proposed for handling centres via randomization-based nonparametric methods. In Step 1, a test of overall treatment effect is carried out using data from all centres simultaneously, without any assumption about treatment by centre interaction. In Step 2, the question of treatment by centre interaction is addressed, usually through the use of parametric multiple regression methods. In cases with suggestion of such interaction, Step 3 is conducted to evaluate different weighting schemes in forming pairwise treatment comparisons averaged across centres to assess the robustness of treatment effects observed in Step 1. 
An attractive inferential feature of this three-step approach is that the Type I error for the test of treatment effect is controlled by requiring statistical significance at each step to proceed to the next step. Extended Mantel-Haenszel methods with stratification adjustment for centre can be used to provide a nonparametric assessment of treatment effect. When adjustment for other covariates, such as baseline values, is desired, the more recent nonparametric analysis of covariance methods are available. Both methods are easy to use, require no assumptions beyond that of a valid randomization mechanism, and can be applied in a similar manner to dichotomous, ordinal, failure time, or continuous response criteria (endpoints). The methods are illustrated using data from a confirmatory clinical trial of a therapeutic agent for the treatment of dry eye disease. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
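The extended Mantel-Haenszel methods cited here generalize the classical Mantel-Haenszel chi-square for centre-stratified 2x2 tables, which for a dichotomous endpoint can be computed directly. This is a sketch of the classical statistic only, without the extensions to ordinal/continuous endpoints or the nonparametric covariance adjustment discussed in the paper:

```python
def mantel_haenszel_chi2(tables):
    """Mantel-Haenszel chi-square (1 df, no continuity correction)
    for a list of 2x2 tables [[a, b], [c, d]], one table per centre:
    rows = treatment/control, columns = response yes/no."""
    num = 0.0
    var = 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        r1, c1 = a + b, a + c                 # row-1 and column-1 totals
        num += a - r1 * c1 / n                # observed minus expected a
        var += (r1 * (c + d) * c1 * (b + d)) / (n * n * (n - 1))
    return num * num / var                    # compare to chi-square(1)
```

Because the statistic accumulates observed-minus-expected counts within each centre before squaring, it adjusts for centre without modeling centre effects, which is the randomization-based spirit of Step 1 in the three-step approach.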
45. A non-parametric procedure for evaluating treatment effect in the meta-analysis of survival data.
- Author
-
Moodie, Patricia F., Nelson, Norma A., and Koch, Gary G.
- Abstract
This paper addresses the problem of combining information from independent clinical trials which compare survival distributions of two treatment groups. Current meta-analytic methods which take censoring into account are often not feasible for meta-analyses which synthesize summarized results in published (or unpublished) references, as these methods require information usually not reported. The paper presents methodology which uses the log(-log) survival function difference, i.e. log(-log S2(t)) - log(-log S1(t)), as the contrast index to represent the multiplicative treatment effect on survival in independent trials. This article shows by the second mean value theorem for integrals that this contrast index, denoted as θ, is interpretable as a weighted average on a natural logarithmic scale of hazard ratios within the interval [0, t] in a trial. When the within-trial proportional hazards assumption is true, θ is the logarithm of the proportionality constant for the common hazard ratio for the interval considered within the trial. In this situation, an important advantage of using θ as a contrast index in the proposed methodology is that the estimation of θ is not affected by length of follow-up time. Other commonly used indices such as the odds ratio, risk ratio and risk differences do not have this invariance property under the proportional hazard model, since their estimation may be affected by length of follow-up time as a technical artefact. Thus, the proposed methodology obviates problems which often occur in survival meta-analysis because trials do not report survival at the same length of follow-up time. Even when the within-trial proportional hazards assumption is not realistic, the proposed methodology has the capability of testing a global null hypothesis of no multiplicative treatment effect on the survival distributions of two groups for all studies.
A discussion of weighting schemes for meta-analysis is provided, in particular, a weighting scheme based on effective sample sizes is suggested for the meta-analysis of time-to-event data which involves censoring. A medical example illustrating the methodology is given. A simulation investigation suggested that the methodology performs well in the presence of moderate censoring. Copyright © 2004 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
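As a hedged illustration of the contrast index described in the abstract above, the following minimal Python sketch (function names and the weighting inputs are ours, not from the paper) computes θ = log(-log S2(t)) - log(-log S1(t)) from the two groups' survival proportions at a common time t, and pools per-trial values with user-supplied weights. Under proportional hazards with S2(t) = S1(t)^HR, θ reduces to log(HR), which is the invariance property the abstract emphasizes.

```python
import math

def cloglog_contrast(s1, s2):
    """Contrast index theta = log(-log S2(t)) - log(-log S1(t)).

    Under within-trial proportional hazards, S2(t) = S1(t)**HR, so
    theta equals log(HR) regardless of the follow-up time t chosen.
    s1, s2: survival proportions (strictly between 0 and 1) in
    groups 1 and 2 at a common time t.
    """
    return math.log(-math.log(s2)) - math.log(-math.log(s1))

def pooled_theta(trials):
    """Weighted average of per-trial contrast indices.

    trials: list of (s1, s2, weight) tuples; the weights stand in
    for the effective-sample-size weights the paper suggests for
    censored time-to-event data.
    """
    num = sum(w * cloglog_contrast(s1, s2) for s1, s2, w in trials)
    return num / sum(w for _, _, w in trials)
```

For example, with S1(t) = 0.8 and S2(t) = 0.64 = 0.8², the contrast is exactly log(2), the log hazard ratio.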
46. On kernel machine learning for propensity score estimation under complex confounding structures.
- Author
-
Zou, Baiming, Mi, Xinlei, Tighe, Patrick J., Koch, Gary G., and Zou, Fei
- Subjects
- *
MACHINE learning , *ELECTRONIC health records , *PHYSICIANS , *ALGORITHMS , *POSTOPERATIVE pain - Abstract
Post-marketing data offer rich information and cost-effective resources for physicians and policy-makers to address some critical scientific questions in clinical practice. However, the complex confounding structures (e.g., nonlinear and nonadditive interactions) embedded in these observational data often pose major analytical challenges for proper analysis to draw valid conclusions. Furthermore, often made available as electronic health records (EHRs), these data are usually massive, with hundreds of thousands of observational records, which introduces additional computational challenges. In this paper, for comparative effectiveness analysis, we propose a statistically robust yet computationally efficient propensity score (PS) approach to adjust for the complex confounding structures. Specifically, we propose a kernel-based machine learning method for flexible and robust PS modeling to obtain valid PS estimates from observational data with complex confounding structures. The estimated propensity score is then used in the second-stage analysis to obtain a consistent average treatment effect estimate. An empirical variance estimator based on the bootstrap is adopted. A split-and-merge algorithm is further developed to reduce the computational workload of the proposed method for big data, and to obtain a valid variance estimator of the average treatment effect estimate as a by-product. As shown by extensive numerical studies and an application to comparative effectiveness analysis of postoperative pain EHR data, the proposed approach consistently outperforms competing methods, demonstrating its practical utility. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
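As a rough sketch of the two-stage idea in the abstract above — not the paper's actual kernel machine, whose details we do not reproduce — a nonparametric Gaussian-kernel (Nadaraya-Watson) propensity score estimate followed by inverse-probability weighting might look like this. The bandwidth, trimming bounds, and function names are illustrative assumptions.

```python
import numpy as np

def kernel_ps(X, treat, bandwidth=1.0):
    """Gaussian-kernel (Nadaraya-Watson) propensity score estimate.

    A simple stand-in for a kernel machine: the PS at each point is a
    kernel-weighted average of the treatment indicators, which can
    capture nonlinear/nonadditive confounding without a parametric
    model. X: (n, p) covariate matrix; treat: (n,) 0/1 indicator.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * bandwidth ** 2))
    ps = K @ treat / K.sum(axis=1)
    return np.clip(ps, 1e-3, 1 - 1e-3)  # trim extreme scores

def ipw_ate(y, treat, ps):
    """Second stage: inverse-probability-weighted ATE estimate."""
    return np.mean(treat * y / ps - (1 - treat) * y / (1 - ps))
```

With identical covariates the kernel weights are uniform, so the estimated PS collapses to the overall treated fraction, which is a quick sanity check on the estimator.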
47. Adjusted win ratio with stratification: Calculation methods and interpretation.
- Author
-
Gasparyan, Samvel B., Folkvaljon, Folke, Bengtsson, Olof, Buenconsejo, Joan, and Koch, Gary G.
- Subjects
- *
ANALYSIS of covariance , *RANDOM variables , *PETROLEUM transportation , *TREATMENT effectiveness , *MISSING data (Statistics) - Abstract
The win ratio is a general method of comparing locations of distributions of two independent, ordinal random variables, and it can be estimated without distributional assumptions. In this paper we provide a unified theory of win ratio estimation in the presence of stratification and adjustment by a numeric variable. Building step by step on the estimate of the crude win ratio, we compare the corresponding tests with well-known non-parametric tests of group difference (the Wilcoxon rank-sum test, the Fligner–Policello test, the van Elteren test, the test based on regression on ranks, and the rank analysis of covariance test). We show that the win ratio gives an interpretable treatment effect measure, with a corresponding test to detect a treatment effect difference, under minimal assumptions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
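The crude (unstratified) win ratio the abstract builds on can be sketched as follows, assuming an ordinal outcome where larger values are better; the paper's stratified and covariate-adjusted versions pool wins and losses across strata with appropriate weights, which this minimal sketch omits.

```python
def crude_win_ratio(treatment, control):
    """Crude win ratio for two samples of an ordinal outcome.

    Over all between-group pairs, returns
    (# pairs the treatment subject wins) / (# pairs it loses);
    tied pairs contribute to neither count. Assumes larger
    outcome values are better and at least one loss occurs.
    """
    wins = sum(t > c for t in treatment for c in control)
    losses = sum(t < c for t in treatment for c in control)
    return wins / losses
```

For example, treatment scores [5, 3, 2] against control scores [4, 1] yield 4 winning pairs and 2 losing pairs, so the crude win ratio is 2.0.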
48. Is acute heart failure a distinctive disorder? An analysis from BIOSTAT‐CHF.
- Author
-
Davison, Beth A., Senger, Stefanie, Sama, Iziah E., Koch, Gary G., Mebazaa, Alexandre, Dickstein, Kenneth, Samani, Nilesh J., Metra, Marco, Anker, Stefan D., Cleland, John G., Ng, Leong L., Mordi, Ify R., Zannad, Faiez, Filippatos, Gerasimos S., Hillege, Hans L., Ponikowski, Piotr, Veldhuisen, Dirk J., Lang, Chim C., Meer, Peter, and Núñez, Julio
- Subjects
- *
HEART failure , *SYMPTOMS , *PROTEOMICS , *RETROSPECTIVE studies , *BIOMARKERS - Abstract
Aims: This retrospective analysis sought to identify markers that might distinguish between acute heart failure (HF) and worsening HF in chronic outpatients. Methods and results: The BIOSTAT‐CHF index cohort included 2516 patients with new or worsening HF symptoms: 1694 enrolled as inpatients (acute HF) and 822 as outpatients (worsening HF in chronic outpatients). A validation cohort included 935 inpatients and 803 outpatients. Multivariable models were developed in the index cohort using clinical characteristics, routine laboratory values, and proteomics data to examine which factors predict adverse outcomes in both conditions and to determine which factors differ between acute HF and worsening HF in chronic outpatients, validated in the validation cohort. Patients with acute HF had substantially higher morbidity and mortality (6‐month mortality was 12.3% for acute HF and 4.7% for worsening HF in chronic outpatients). Multivariable models predicting 180‐day mortality and 180‐day HF readmission differed substantially between acute HF and worsening HF in chronic outpatients. Carbohydrate antigen 125 was the strongest single biomarker to distinguish acute HF from worsening HF in chronic outpatients, but only yielded a C‐index of 0.71. A model including multiple biomarkers and clinical variables achieved a high degree of discrimination with a C‐index of 0.913 in the index cohort and 0.901 in the validation cohort. Conclusions: This study identifies different characteristics and predictors of outcome in acute HF patients as compared to outpatients with chronic HF developing worsening HF. The markers identified may be useful in better diagnosing acute HF and may become targets for treatment development. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
49. An Analysis of Physician Visit Data From a Complex Sample Survey.
- Author
-
Freeman Jr., Daniel H., Freeman, Jean L., Koch, Gary G., and Brock, Dwight B.
- Subjects
- *
PHYSICIAN services utilization , *OUTPATIENT medical care use , *MEDICAL care use , *PHYSICIANS , *GENERAL practitioners , *PUBLIC health , *LEAST squares , *COMPARATIVE studies , *STATISTICAL services - Abstract
A generalization of ordinary least squares methods is used in the analysis of physician visit data from a complex sample survey. The emphasis in this paper is on the valid substantive inferences to be drawn from an analysis of this type of data. The procedure is found to be useful in two ways. First, the results of a comparative sampling study are reported (see Appendix). Second, the procedure is used to remove statistically non-significant variation from the data in order to generate fitted or smoothed estimates on which the substantive analyst may focus his attention. These fitted values are then examined for implications for physician service utilization on a national basis. It is concluded that age is an important variable, while the effect of sex and race depends on age. Similarly, residence and income are important, but the effect of education depends on the level of income. [ABSTRACT FROM AUTHOR]
- Published
- 1976
- Full Text
- View/download PDF
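The generalization of ordinary least squares referred to in the abstract above can be sketched as weighted (generalized) least squares, where the covariance matrix V of the estimated response vector comes from the complex sample design's variance estimation, which we do not reproduce here; the function name and inputs are illustrative assumptions.

```python
import numpy as np

def weighted_least_squares(X, y, V):
    """Generalized (weighted) least squares fit.

    X: (n, p) design matrix; y: (n,) vector of survey estimates;
    V: (n, n) estimated covariance matrix of y from the sample
    design. Returns b = (X' V^-1 X)^-1 X' V^-1 y.
    """
    Vinv = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
```

As a sanity check, fitting an intercept-only model to two estimates with unequal variances returns their precision-weighted mean rather than the simple average.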
50. Analytic approaches to longitudinal caries data in adults.
- Author
-
Beck, James D., Lawrence, Herenia P., and Koch, Gary G.
- Subjects
- *
DENTAL caries , *ADULTS , *EPIDEMIOLOGY , *TOOTH loss , *MEDICAL personnel , *DENTISTRY - Abstract
The objective of this paper is to consider current methods for analyzing longitudinal caries data in adults. To illustrate these methods, we used data from the Piedmont dental study, a prospective investigation of the oral health of older adults. Longitudinal dental data sets comprise repeated observations of an outcome (often clustered within randomly selected primary sampling units) and a set of covariates for each of many subjects, in whom clustering can occur as a result of measuring teeth, or surfaces, within people. One objective of statistical analysis is to predict the outcome variable as a function of the covariates, while accounting for the correlation among the repeated observations for a given subject and the effect of clustering within subjects, as well as between subjects within primary sampling units, such as communities, schools, hospitals, or other such units. We considered two statistical approaches: generalized estimating equations and survey regression models. We also examined the impact of varying diagnostic criteria for caries estimation between epidemiologists and clinicians. One approach is to perform the usual change analysis of the time_x exam score minus the time_0 score for the baseline and final examinations, while an alternative is to analyze trends among interim examinations. Finally, because caries studies in which the onset of the disease is the endpoint face the problem of censoring due to subject attrition and/or tooth loss, we recommend the incidence density (time-to-event) analytic strategy to address this problem. This approach was found to be most suitable for longitudinal studies of older adults since it accounts for the time each surface remains at risk for the event of interest, making use of interim exam data until the moment the subject and/or the tooth are no longer available for examination. [ABSTRACT FROM AUTHOR]
- Published
- 1997
- Full Text
- View/download PDF
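The incidence density (time-to-event) strategy recommended in the abstract above can be sketched minimally as events per unit of surface-time at risk; the record layout here is our own illustrative assumption, not the Piedmont study's data format.

```python
def incidence_density(records):
    """Incidence density for tooth-surface level caries data.

    Each surface contributes follow-up time until caries onset,
    tooth loss, or the subject's last examination, whichever comes
    first. records: list of (time_at_risk, event) pairs, with
    event = 1 if caries onset was observed and 0 if the surface
    was censored. Returns events per unit of surface-time.
    """
    events = sum(e for _, e in records)
    time_at_risk = sum(t for t, _ in records)
    return events / time_at_risk
```

For example, three surfaces followed for 2, 3, and 5 years with onset observed on the first and third give 2 events over 10 surface-years, an incidence density of 0.2 per surface-year.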