15 results for "Polley, Mei-Yin C"
Search Results
2. Two-Stage Adaptive Design for Prognostic Biomarker Signatures With a Survival Endpoint.
- Author
- Dai, Biyue and Polley, Mei-Yin C.
- Subjects
- TRIPLE-negative breast cancer; SURVIVAL analysis (Biometry); NULL hypothesis; STATISTICAL significance
- Abstract
Cancer biomarker discoveries typically involve using patient specimens. In practice, there is often a strong desire to preserve high-quality biospecimens for studies that are most likely to yield useful information. Previously, we proposed a two-stage adaptive design for binary endpoints which terminates the biomarker study at a futility interim analysis if the model performance is unsatisfactory. In this work, we extend the two-stage design framework to accommodate time-to-event endpoints. The first stage of the procedure involves testing whether the measure of discrimination for survival models (C-index) exceeds a prespecified threshold. We describe the computation of the cross-validated C-index and evaluation of the statistical significance using resampling techniques. The second stage involves an independent model validation. Our simulation studies show that under the null hypothesis, the proposed design maintains Type I error at the nominal level and has high probabilities of terminating the study early. Under the alternative hypothesis, power of the design is a function of the true event proportion, the sample size, and the targeted improvement in the discriminant measure. We apply the method to the design of a prognostic biomarker study in patients with triple-negative breast cancer. Some practical aspects of the proposed method are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
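The C-index and resampling test described in record 2 can be sketched in Python. This is a minimal illustration, not the authors' implementation: the cross-validation loop is omitted, a simple permutation test stands in for their resampling procedure, and all function names are ours.

```python
import numpy as np

def c_index(time, event, risk):
    """Harrell's concordance index: among usable pairs, the fraction where
    the subject with the shorter survival time has the higher predicted risk.
    A pair is usable if the times differ and the earlier time is an event."""
    n = len(time)
    concordant = 0.0
    usable = 0
    for i in range(n):
        for j in range(i + 1, n):
            if time[i] == time[j]:
                continue
            lo, hi = (i, j) if time[i] < time[j] else (j, i)
            if not event[lo]:          # earlier time censored: pair unusable
                continue
            usable += 1
            if risk[lo] > risk[hi]:
                concordant += 1.0
            elif risk[lo] == risk[hi]:
                concordant += 0.5      # tied risks count as half-concordant
    if usable == 0:
        raise ValueError("no usable pairs")
    return concordant / usable

def permutation_pvalue(time, event, risk, n_perm=1000, seed=0):
    """Permutation check of whether the observed C-index exceeds chance:
    risk scores are shuffled to build a null distribution of C."""
    rng = np.random.default_rng(seed)
    observed = c_index(time, event, risk)
    null = [c_index(time, event, rng.permutation(risk)) for _ in range(n_perm)]
    p = float(np.mean([c >= observed for c in null]))
    return observed, p
```

With risk scores perfectly ordered against survival times and no censoring, `c_index` returns 1.0.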
3. Early-Phase Platform Trials: A New Paradigm for Dose Finding and Treatment Screening in the Era of Precision Oncology.
- Author
- Polley, Mei-Yin C. and Cheung, Ying Kuen
- Subjects
- NEW trials; ONCOLOGY; PATIENT selection; INDIVIDUALIZED medicine; EXPERIMENTAL design
- Abstract
Applications in early-phase cancer trials have motivated the development of many statistical designs since the late 1980s, including dose-finding methods, futility screening, treatment selection, and early stopping rules. These methods were often proposed to address conventional cytotoxic therapeutics for neoplastic diseases. Recent advances in precision medicine have motivated novel trial designs, most notably the idea of master protocol (eg, platform trial, basket trial, umbrella trial, N-of-1 trial), for the evaluation of molecularly targeted cancer therapies. In this article, we review the concepts and methodology of early-phase cancer trial designs with a focus on dose finding and treatment screening and put these methods in the context of platform trials of molecularly targeted cancer therapies. Because most cancer trial designs have been developed for cytotoxic agents, we will discuss how these time-tested design principles hold relevance for targeted cancer therapies, and we will delineate how a master protocol may serve as an efficient platform for safety and efficacy evaluations of novel targeted therapies. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
4. Phase III Precision Medicine Clinical Trial Designs That Integrate Treatment and Biomarker Evaluation.
- Author
- Polley, Mei-Yin C., Korn, Edward L., and Freidlin, Boris
- Subjects
- CLINICAL medicine; INDIVIDUALIZED medicine; CLINICAL trials; DRUG development
- Abstract
Recent advances in biotechnology and cancer genomics have afforded enormous opportunities for development of more effective anticancer therapies. A key thrust of this modern drug development paradigm is successful identification of predictive biomarkers that can distinguish patients who might be sensitive to new targeted therapies. To respond to this challenge, a number of phase III cancer trial designs integrating biomarker-based objectives have been proposed and implemented in oncology drug development. In this article, we provide an updated review of commonly used biomarker-based randomized clinical trial designs, with a particular focus on design efficiency. When the efficacy of a new therapy may be limited to a biomarker-defined subgroup, the choice of an appropriate randomized clinical trial design should be guided by the strength of the biomarker's credentials. If compelling evidence indicates that a targeted therapy is beneficial only in a particular biomarker-defined subgroup, an enrichment design should be used. If there is strong evidence that the treatment is likely to be more beneficial in the biomarker-positive patients but a meaningful benefit is also possible in the biomarker-negative patients, then a properly powered biomarker-stratified design (eg, a subgroup-specific or Marker Sequential Test strategy) would provide the most rigorous determination of the sensitive populations. If the evidence supporting the predictive value of the biomarker is weak and the treatment is expected to work in the overall population, then a fallback design could be used. Careful selection of an appropriate phase III design strategy that integrates evaluation of a new anticancer therapy and its companion diagnostic is critical to the success of precision medicine in oncology. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
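The design-selection logic in record 4 amounts to a mapping from the strength of the biomarker's credentials to a phase III design. A toy sketch of that mapping (the evidence labels and function name are ours, not the authors' terminology):

```python
def choose_phase3_design(biomarker_evidence: str) -> str:
    """Illustrative lookup of the decision rule described in the abstract."""
    designs = {
        # Benefit essentially confined to the biomarker-defined subgroup.
        "compelling": "enrichment design",
        # Likely greater benefit in biomarker-positive patients, but a
        # meaningful benefit in biomarker-negative patients is possible.
        "strong": "biomarker-stratified design "
                  "(subgroup-specific or Marker Sequential Test)",
        # Weak predictive evidence; treatment expected to work overall.
        "weak": "fallback design",
    }
    return designs[biomarker_evidence]
```

For example, `choose_phase3_design("weak")` returns `"fallback design"`.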
7. On the Quest of Risk Stratification in HER2-Positive Breast Cancer.
- Author
- Polley, Mei-Yin C.
- Published
- 2022
- Full Text
- View/download PDF
8. Two-stage adaptive cutoff design for building and validating a prognostic biomarker signature.
- Author
- Polley, Mei-Yin C., Polley, Eric C., Huang, Erich P., Freidlin, Boris, and Simon, Richard
- Abstract
Cancer biomarkers are frequently evaluated using archived specimens collected from previously conducted therapeutic trials. Routine collection and banking of high-quality specimens is an expensive and time-consuming process. Therefore, care should be taken to preserve these precious resources. Here, we propose a novel two-stage adaptive cutoff design that affords the possibility of stopping the biomarker study early if an interim evaluation of the model performance is unsatisfactory, thereby allowing one to preserve the remaining specimens for future research. In addition, our design integrates important elements necessary to meet statistical rigor and practical demands for developing and validating a prognostic biomarker signature, including maintaining strict separation between the datasets used to build and evaluate the model and producing a locked-down signature to facilitate future validation. We conduct simulation studies to evaluate the operating characteristics of the proposed design. We show that under the null hypothesis when the model performance is deemed undesirable, the proposed design maintains type I error at the nominal level, has high probabilities of terminating the study early, and results in substantial savings in specimens. Under the alternative hypothesis, power is generally high when the total sample size and the targeted degree of improvement in prediction accuracy are reasonably large. We illustrate the use of the procedure with a dataset in patients with diffuse large-B-cell lymphoma. The practical aspects of the proposed design are discussed. Published 2014. This article is a U.S. Government work and is in the public domain in the USA. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
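The two-stage early-stopping idea in record 8 can be illustrated with a small Monte Carlo simulation. This is a rough sketch under stated assumptions, not the authors' procedure: prediction accuracy is modeled as a binomial proportion, and the threshold, sample sizes, and normal-approximation test are all illustrative choices of ours.

```python
import numpy as np

def two_stage_sim(true_acc, threshold=0.6, n1=100, n2=100,
                  n_sim=2000, seed=1):
    """Monte Carlo sketch of a two-stage rule: stop at stage 1 if the
    estimated accuracy falls below the threshold; otherwise validate on
    independent stage-2 data with a one-sided test.  Returns the early
    stopping rate and the overall rejection (success) rate."""
    rng = np.random.default_rng(seed)
    stopped_early = 0
    success = 0
    for _ in range(n_sim):
        est1 = rng.binomial(n1, true_acc) / n1        # stage-1 estimate
        if est1 < threshold:
            stopped_early += 1                        # futility stop
            continue
        est2 = rng.binomial(n2, true_acc) / n2        # independent stage 2
        # normal-approximation one-sided test of H0: accuracy <= threshold
        se = np.sqrt(threshold * (1 - threshold) / n2)
        z = (est2 - threshold) / se
        if z > 1.645:                                 # one-sided 5% level
            success += 1
    return stopped_early / n_sim, success / n_sim
```

Under the null (`true_acc` equal to the threshold), the simulation shows a high early-stopping rate and an overall rejection rate below the nominal 5%, mirroring the operating characteristics claimed in the abstract.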
9. Comparison of confidence interval methods for an intra-class correlation coefficient (ICC).
- Author
- Ionan, Alexei C., Polley, Mei-Yin C., McShane, Lisa M., and Dobbin, Kevin K.
- Subjects
- CONFIDENCE intervals; STATISTICAL correlation; BAYESIAN analysis; INTERVAL analysis; RANDOM effects model; MULTILEVEL models
- Abstract
Background The intraclass correlation coefficient (ICC) is widely used in biomedical research to assess the reproducibility of measurements between raters, labs, technicians, or devices. For example, in an inter-rater reliability study, a high ICC value means that noise variability (between-raters and within-raters) is small relative to variability from patient to patient. A confidence interval or Bayesian credible interval for the ICC is a commonly reported summary. Such intervals can be constructed employing either frequentist or Bayesian methodologies. Methods This study examines the performance of three different methods for constructing an interval in a two-way, crossed, random effects model without interaction: the Generalized Confidence Interval method (GCI), the Modified Large Sample method (MLS), and a Bayesian method based on a noninformative prior distribution (NIB). Guidance is provided on interval construction method selection based on study design, sample size, and normality of the data. We compare the coverage probabilities and widths of the different interval methods. Results We show that, for the two-way, crossed, random effects model without interaction, care is needed in interval method selection because the interval estimates do not always have properties that the user expects. While different methods generally perform well when there are a large number of levels of each factor, large differences between the methods emerge when the number of levels of one or more factors is limited. In addition, all methods are shown to lack robustness to certain hard-to-detect violations of normality when the sample size is limited. Conclusions Decision rules and software programs for interval construction are provided for practical implementation in the two-way, crossed, random effects model without interaction. All interval methods perform similarly when the data are normal and there are sufficient numbers of levels of each factor. The MLS and GCI methods outperform the NIB when one of the factors has a limited number of levels and the data are normally distributed or nearly normally distributed. None of the methods work well if the number of levels of a factor is limited and data are markedly non-normal. The software programs are implemented in the popular R language. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
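The ICC point estimate for the two-way, crossed, random-effects model without interaction can be computed from the standard ANOVA mean squares. A minimal numpy sketch (point estimate only; the GCI, MLS, and NIB interval methods compared in the paper are not reproduced here, and the function name is ours):

```python
import numpy as np

def icc_two_way(x):
    """ICC(2,1) for an (n subjects x k raters) matrix under a two-way,
    crossed, random-effects model without interaction."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)     # subject means
    col_means = x.mean(axis=0)     # rater means
    # Mean squares from the standard two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

With perfectly agreeing raters (identical columns), the residual and rater mean squares vanish and the ICC equals 1.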
10. An International Ki67 Reproducibility Study.
- Author
- Polley, Mei-Yin C., Leung, Samuel C. Y., McShane, Lisa M., Gao, Dongxia, Hugh, Judith C., Mastropasqua, Mauro G., Viale, Giuseppe, Zabaglo, Lila A., Penault-Llorca, Frédérique, Bartlett, John M.S., Gown, Allen M., Symmans, W. Fraser, Piper, Tammy, Mehl, Erika, Enos, Rebecca A., Hayes, Daniel F., Dowsett, Mitch, and Nielsen, Torsten O.
- Subjects
- TUMOR markers; BREAST cancer research; CANCER cells; IMMUNOHISTOCHEMISTRY; CANCER invasiveness
- Abstract
Background In breast cancer, immunohistochemical assessment of proliferation using the marker Ki67 has potential use in both research and clinical management. However, lack of consistency across laboratories has limited Ki67's value. A working group was assembled to devise a strategy to harmonize Ki67 analysis and increase scoring concordance. Toward that goal, we conducted a Ki67 reproducibility study. Methods Eight laboratories received 100 breast cancer cases arranged into 1-mm core tissue microarrays—one set stained by the participating laboratory and one set stained by the central laboratory, both using antibody MIB-1. Each laboratory scored Ki67 as percentage of positively stained invasive tumor cells using its own method. Six laboratories repeated scoring of 50 locally stained cases on 3 different days. Sources of variation were analyzed using random effects models with log2-transformed measurements. Reproducibility was quantified by intraclass correlation coefficient (ICC), and the approximate two-sided 95% confidence intervals (CIs) for the true intraclass correlation coefficients in these experiments were provided. Results Intralaboratory reproducibility was high (ICC = 0.94; 95% CI = 0.93 to 0.97). Interlaboratory reproducibility was only moderate (central staining: ICC = 0.71, 95% CI = 0.47 to 0.78; local staining: ICC = 0.59, 95% CI = 0.37 to 0.68). Geometric mean of Ki67 values for each laboratory across the 100 cases ranged from 7.1% to 23.9% with central staining and from 6.1% to 30.1% with local staining. Factors contributing to interlaboratory discordance included tumor region selection, counting method, and subjective assessment of staining positivity. Formal counting methods gave more consistent results than visual estimation. Conclusions Substantial variability in Ki67 scoring was observed among some of the world's most experienced laboratories. Ki67 values and cutoffs for clinical decision-making cannot be transferred between laboratories without standardizing scoring methodology because analytical validity is limited. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
11. Statistical and Practical Considerations for Clinical Evaluation of Predictive Biomarkers.
- Author
- Polley, Mei-Yin C., Freidlin, Boris, Korn, Edward L., Conley, Barbara A., Abrams, Jeffrey S., and McShane, Lisa M.
- Subjects
- CANCER treatment; CANCER patients; BIOMARKERS; CLINICAL trials; QUANTITATIVE research
- Abstract
Predictive biomarkers to guide therapy for cancer patients are a cornerstone of precision medicine. Discussed herein are considerations regarding the design and interpretation of such predictive biomarker studies. These considerations are important for both planning and interpreting prospective studies and for using specimens collected from completed randomized clinical trials. Specific issues addressed are differentiation between qualitative and quantitative predictive effects, challenges due to sample size requirements for predictive biomarker assessment, and consideration of additional factors relevant to clinical utility assessment, such as toxicity and cost of new therapies as well as costs and potential morbidities associated with routine use of biomarker-based tests. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
12. Practical modifications to the time-to-event continual reassessment method for phase I cancer trials with fast patient accrual and late-onset toxicities.
- Author
- Polley, Mei-Yin C.
- Published
- 2011
- Full Text
- View/download PDF
13. Leveraging external data in the design and analysis of clinical trials in neuro-oncology.
- Author
- Rahman, Rifaquat, Ventz, Steffen, McDunn, Jon, Louv, Bill, Reyes-Rivera, Irmarie, Polley, Mei-Yin C, Merchant, Fahar, Abrey, Lauren E, Allen, Joshua E, Aguilar, Laura K, Aguilar-Cordova, Estuardo, Arons, David, Tanner, Kirk, Bagley, Stephen, Khasraw, Mustafa, Cloughesy, Timothy, Wen, Patrick Y, Alexander, Brian M, and Trippa, Lorenzo
- Subjects
- EXPERIMENTAL design; DATA analysis; INFORMATION sharing; OPTIMAL stopping (Mathematical statistics); RESEARCH institutes
- Abstract
Integration of external control data, with patient-level information, in clinical trials has the potential to accelerate the development of new treatments in neuro-oncology by contextualising single-arm studies and improving decision making (eg, early stopping decisions). Based on a series of presentations at the 2020 Clinical Trials Think Tank hosted by the Society of Neuro-Oncology, we provide an overview on the use of external control data representative of the standard of care in the design and analysis of clinical trials. High-quality patient-level records, rigorous methods, and validation analyses are necessary to effectively leverage external data. We review study designs, statistical methods, risks, and potential distortions in using external data from completed trials and real-world data, as well as data sources, data sharing models, ongoing work, and applications in glioblastoma. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
14. Criteria for the use of omics-based predictors in clinical trials.
- Author
- McShane, Lisa M., Cavenagh, Margaret M., Lively, Tracy G., Eberhard, David A., Bigbee, William L., Williams, P. Mickey, Mesirov, Jill P., Polley, Mei-Yin C., Kim, Kelly Y., Tricoli, James V., Taylor, Jeremy M. G., Shuman, Deborah J., Simon, Richard M., Doroshow, James H., and Conley, Barbara A.
- Subjects
- CLINICAL trials; MATHEMATICAL models; SCIENTISTS; MEDICAL ethics
- Abstract
The US National Cancer Institute (NCI), in collaboration with scientists representing multiple areas of expertise relevant to 'omics'-based test development, has developed a checklist of criteria that can be used to determine the readiness of omics-based tests for guiding patient care in clinical trials. The checklist criteria cover issues relating to specimens, assays, mathematical modelling, clinical trial design, and ethical, legal and regulatory aspects. Funding bodies and journals are encouraged to consider the checklist, which they may find useful for assessing study quality and evidence strength. The checklist will be used to evaluate proposals for NCI-sponsored clinical trials in which omics tests will be used to guide therapy. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
15. Criteria for the use of omics-based predictors in clinical trials: explanation and elaboration.
- Author
- McShane, Lisa M., Cavenagh, Margaret M., Lively, Tracy G., Eberhard, David A., Bigbee, William L., Williams, P. Mickey, Mesirov, Jill P., Polley, Mei-Yin C., Kim, Kelly Y., Tricoli, James V., Taylor, Jeremy M. G., Shuman, Deborah J., Simon, Richard M., Doroshow, James H., and Conley, Barbara A.
- Abstract
High-throughput ‘omics’ technologies that generate molecular profiles for biospecimens have been extensively used in preclinical studies to reveal molecular subtypes and elucidate the biological mechanisms of disease, and in retrospective studies on clinical specimens to develop mathematical models to predict clinical endpoints. Nevertheless, the translation of these technologies into clinical tests that are useful for guiding management decisions for patients has been relatively slow. It can be difficult to determine when the body of evidence for an omics-based test is sufficiently comprehensive and reliable to support claims that it is ready for clinical use, or even that it is ready for definitive evaluation in a clinical trial in which it may be used to direct patient therapy. Reasons for this difficulty include the exploratory and retrospective nature of many of these studies, the complexity of these assays and their application to clinical specimens, and the many potential pitfalls inherent in the development of mathematical predictor models from the very high-dimensional data generated by these omics technologies. Here we present a checklist of criteria to consider when evaluating the body of evidence supporting the clinical use of a predictor to guide patient therapy. Included are issues pertaining to specimen and assay requirements, the soundness of the process for developing predictor models, expectations regarding clinical study design and conduct, and attention to regulatory, ethical, and legal issues. The proposed checklist should serve as a useful guide to investigators preparing proposals for studies involving the use of omics-based tests. The US National Cancer Institute plans to refer to these guidelines for review of proposals for studies involving omics tests, and it is hoped that other sponsors will adopt the checklist as well. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library