11 results for "Janice M. McCarthy"
Search Results
2. Efficient estimation of grouped survival models.
- Author
- Zhiguo Li, Jiaxing Lin, Alexander B. Sibley, Tracy Truong, Katherina C. Chua, Yu Jiang, Janice M. McCarthy, Deanna L. Kroetz, Andrew S. Allen, and Kouros Owzar
- Published
- 2019
- Full Text
- View/download PDF
3. Parameter estimation and identifiability analysis for a bivalent analyte model of monoclonal antibody-antigen binding
- Author
- Kyle Nguyen, Kan Li, Kevin Flores, Georgia D. Tomaras, S. Moses Dennison, and Janice M. McCarthy
- Abstract
Discovery research for therapeutic antibodies and vaccine development requires an in-depth understanding of antibody-antigen interactions. Label-free techniques such as Surface Plasmon Resonance (SPR) enable the characterization of biomolecular interactions through kinetics measurements, typically by binding antigens in solution to monoclonal antibodies immobilized on a SPR chip. A 1:1 Langmuir binding model is commonly used to fit the kinetics data and derive rate constants. However, in certain contexts it is necessary to immobilize the antigen to the chip and flow the antibodies in solution. One such scenario is the screening of monoclonal antibodies (mAbs) for breadth against a range of antigens, where a bivalent analyte binding model is required to adequately describe the kinetics data unless antigen immobilization density is optimized to eliminate avidity effects. A bivalent analyte model is offered in several existing software packages intended for standard-throughput SPR instruments, but is lacking for high-throughput SPR instruments. Existing methods also do not explore multiple local minima and parameter identifiability, issues common in non-linear optimization. Here, we have developed a method for analyzing bivalent analyte binding kinetics directly applicable to high-throughput SPR data collected in a non-regenerative fashion, and have included a grid search on initial parameter values and a profile likelihood method to determine parameter identifiability. We fit the data of a broadly neutralizing HIV-1 mAb binding to HIV-1 envelope glycoprotein gp120 to a system of ordinary differential equations modeling bivalent binding. Our identifiability analysis discovered a non-identifiable parameter when data are collected under the standard experimental design for monitoring the association and dissociation phases. We used simulations to determine an improved experimental design, which, when executed, resulted in the reliable estimation of all rate constants. These methods will be valuable tools in analyzing the binding of mAbs to an array of antigens to expedite therapeutic antibody discovery research.
Author summary: While commercial software programs for the analysis of bivalent analyte binding kinetics are available for low-throughput instruments, they cannot be easily applied to data generated by high-throughput instruments, particularly when the chip surface is not regenerated between titration cycles. Further, existing software does not address common issues in fitting non-linear systems of ordinary differential equations (ODEs), such as optimizations getting trapped in local minima or parameters that are not identifiable. In this work, we introduce a pipeline for analysis of bivalent analyte binding kinetics that 1) allows for the use of high-throughput, non-regenerative experimental designs, 2) optimizes using several sets of initial parameter values to ensure that the algorithm is able to reach the lowest minimum error, and 3) applies a profile likelihood method to explore parameter identifiability. In our experimental application of the method, we found that one of the kinetics parameters (kd2) cannot be reliably estimated with the standard length of the dissociation phase. Using simulation and identifiability analysis, we determined the optimal length of dissociation so that the parameter can be reliably estimated, saving time and reagents. These methodologies offer robust determination of the kinetics parameters for high-throughput bivalent analyte SPR experiments.
- Published
- 2022
- Full Text
- View/download PDF
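The bivalent analyte model and profile-likelihood identifiability check described in the abstract above can be sketched in a few lines. The following is an illustrative Python sketch, not the authors' pipeline: the parameterization (statistical factors absorbed into the rate constants), the rate values, Rmax, and the noise level are all hypothetical.

```python
# Illustrative sketch only (not the authors' pipeline): simulate a bivalent
# analyte sensorgram and profile the fit over kd2 to probe identifiability.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T_ASSOC, C0, RMAX = 300.0, 50e-9, 100.0  # assoc. time (s), analyte conc (M), capacity (RU)
rng = np.random.default_rng(0)

def conc(t):
    # Constant analyte concentration during association, buffer afterwards.
    return C0 if t < T_ASSOC else 0.0

def rhs(t, y, ka1, kd1, ka2, kd2):
    # One common bivalent analyte parameterization: R1 singly bound,
    # R2 doubly bound complexes, both in response units (RU).
    R1, R2 = y
    free = RMAX - R1 - R2                    # unoccupied ligand sites (RU)
    dR1 = ka1 * conc(t) * free - kd1 * R1 - ka2 * R1 * free + kd2 * R2
    dR2 = ka2 * R1 * free - kd2 * R2
    return [dR1, dR2]

def sensorgram(rates, t):
    sol = solve_ivp(rhs, (0.0, t[-1]), [0.0, 0.0], t_eval=t, args=tuple(rates))
    return sol.y[0] + sol.y[1]               # total response R1 + R2

def sse(log_rates, t, y_obs):
    return np.sum((sensorgram(np.exp(log_rates), t) - y_obs) ** 2)

t = np.linspace(0.0, 600.0, 121)
true = np.array([1e5, 1e-3, 1e-4, 5e-4])     # ka1, kd1, ka2, kd2 (made up)
y_obs = sensorgram(true, t) + rng.normal(0.0, 0.5, t.size)

profile = []
for kd2 in np.logspace(-6, -2, 9):           # profile-likelihood scan over kd2
    obj = lambda p: sse(np.append(p, np.log(kd2)), t, y_obs)
    fit = minimize(obj, np.log(true[:3]), method="Nelder-Mead")
    profile.append((kd2, fit.fun))           # flat profile => kd2 not identifiable
```

A profile that stays essentially flat in kd2 under a short dissociation phase is exactly the non-identifiability signature the abstract reports; lengthening the dissociation phase restores curvature in the profile.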
4. Focused goodness of fit tests for gene set analyses
- Author
- Mengqi Zhang, Matthew B. Harms, David Goldstein, Janice M. McCarthy, Cristiane de Araújo Martins Moreno, Sahar Gelfman, and Andrew S. Allen
- Subjects
Computer science; Null (mathematics); Amyotrophic Lateral Sclerosis; Set (abstract data type); Phenotype; Goodness of fit; Exome Sequencing; Trait; Null distribution; Humans; Detection theory; Data mining; Genetic Testing; Molecular Biology; Statistic; Information Systems
- Abstract
Gene set-based signal detection analyses are used to detect an association between a trait and a set of genes by accumulating signals across the genes in the gene set. Since signal detection is concerned with identifying whether any of the genes in the gene set are non-null, a goodness-of-fit (GOF) test can be used to compare whether the observed distribution of gene-level tests within the gene set agrees with the theoretical null distribution. Here, we present a flexible gene set-based signal detection framework based on tail-focused GOF statistics. We show that the power of the various statistics in this framework depends critically on two parameters: the proportion of genes within the gene set that are non-null and the degree of separation between the null and alternative distributions of the gene-level tests. We give guidance on which statistic to choose for a given situation and implement the methods in a fast and user-friendly R package, wHC (https://github.com/mqzhanglab/wHC). Finally, we apply these methods to a whole exome sequencing study of amyotrophic lateral sclerosis.
- Published
- 2021
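As a rough illustration of the tail-focused goodness-of-fit idea in the abstract above: the authors' implementation is the R package wHC, while this hedged Python sketch uses the classic higher-criticism statistic of Donoho and Jin, with an HC+ guard and a Monte Carlo null, all on simulated p-values.

```python
# Tail-focused GOF scan over gene-level p-values (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def higher_criticism(pvals, alpha0=0.5):
    # Max standardized gap between the empirical and Uniform(0,1) CDFs,
    # taken over the smallest alpha0 fraction of the order statistics.
    p = np.sort(np.asarray(pvals))
    n = p.size
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1.0 - p))
    keep = (i <= alpha0 * n) & (p > 1.0 / n)   # HC+ guard against tiny p_(1)
    return hc[keep].max()

# Null reference by simulation; a sparse non-null subset inflates the statistic.
null = [higher_criticism(rng.uniform(size=200)) for _ in range(2000)]
obs = higher_criticism(np.concatenate([rng.uniform(size=190),
                                       rng.beta(0.2, 1.0, size=10)]))
p_signal = np.mean(np.array(null) >= obs)      # signal-detection p-value
```

As the abstract notes, power hinges on the non-null proportion (here 10/200) and on how far the non-null p-value distribution separates from uniform.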
5. Exploiting expression patterns across multiple tissues to map expression quantitative trait loci.
- Author
- Chaitanya R. Acharya, Janice M. McCarthy, Kouros Owzar, and Andrew S. Allen
- Published
- 2016
- Full Text
- View/download PDF
6. Bridging the gaps: using an NHP model to predict single dose radiation absorption in humans
- Author
- Edwin S. Iversen, Janice M. McCarthy, Holly K. Dressman, Joel R. Ross, Nelson J. Chao, Kirsten Bell Burdett, Gary Phillips, and Gary Bruce Lipton
- Subjects
Models, Statistical; Bridging (networking); Materials science; Radiological and Ultrasound Technology; Population; Absorption, Radiation; Bayes Theorem; Dose-Response Relationship, Radiation; Radiation Exposure; Macaca mulatta; Species Specificity; Biodosimetry; Animals; Humans; Radiology, Nuclear Medicine and Imaging; Transcriptome; Bone Marrow Transplantation; Biomedical engineering
- Abstract
Purpose: Design and characterization of a radiation biodosimetry device are complicated by the fact that the requisite data are not available in the intended use population, namely humans e...
- Published
- 2018
- Full Text
- View/download PDF
7. Efficient estimation of grouped survival models
- Author
- Janice M. McCarthy, Tracy Truong, Jiaxing Lin, Yu Jiang, Deanna L. Kroetz, Kouros Owzar, Alexander B. Sibley, Andrew S. Allen, Zhiguo Li, and Katherina C. Chua
- Subjects
Computer science; Statistics as Topic; Efficient score; Biochemistry; Mathematical Sciences; Gene Frequency; Models, Genetic; Genome-wide analysis; Structural Biology; Multiple testing; Cancer; Likelihood Functions; Applied Mathematics; Grouped data; Biological Sciences; Computer Science Applications; Benchmarking; Phenotype; Data mining; Bioinformatics; Heritability; Information and Computing Sciences; Breast Cancer; Covariate; Genetics; Humans; Molecular Biology; Survival analysis; Data collection; Human Genome; Score statistic; Discrete censoring; Multiple comparisons problem; Pharmacogenomics; Software; Genome-Wide Association Study
- Abstract
Background: Time- and dose-to-event phenotypes used in basic science and translational studies are commonly measured imprecisely or incompletely due to limitations of the experimental design or data collection schema. For example, drug-induced toxicities are not reported by the actual time or dose triggering the event, but rather are inferred from the cycle or dose to which the event is attributed. This exemplifies a prevalent type of imprecise measurement called grouped failure time, where times or doses are restricted to discrete increments. Failure to appropriately account for the grouped nature of the data, when present, may lead to biased analyses.
Results: We present groupedSurv, an R package which implements a statistically rigorous and computationally efficient approach for conducting genome-wide analyses based on grouped failure time phenotypes. Our approach accommodates adjustments for baseline covariates, and analysis at the variant or gene level. We illustrate the statistical properties of the approach and computational performance of the package by simulation. We present the results of a reanalysis of a published genome-wide study to identify common germline variants associated with the risk of taxane-induced peripheral neuropathy in breast cancer patients.
Conclusions: groupedSurv enables fast and rigorous genome-wide analysis on the basis of grouped failure time phenotypes at the variant, gene or pathway level. The package is freely available under a public license through the Comprehensive R Archive Network. Electronic supplementary material: The online version of this article (10.1186/s12859-019-2899-x) contains supplementary material, which is available to authorized users.
- Published
- 2019
- Full Text
- View/download PDF
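The grouped failure-time likelihood this package builds on can be written down compactly. Below is an illustrative Python sketch, not groupedSurv itself (the real implementation is an R package that tests variants via an efficient score statistic rather than refitting genome-wide): a grouped proportional-hazards model with a complementary log-log link, fit by maximum likelihood on simulated data with hypothetical parameter values.

```python
# Grouped PH model: conditional probability of an event in interval j is
# h_j(x) = 1 - exp(-exp(gamma_j + x * beta))  (cloglog link).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, J = 400, 4
x = rng.binomial(2, 0.3, n).astype(float)         # e.g. genotype dosage 0/1/2

gamma_true, beta_true = np.full(J, -1.5), 0.5     # hypothetical parameters
k = np.empty(n, dtype=int)                        # interval of event/censoring
delta = np.empty(n, dtype=int)                    # 1 = event observed
for i in range(n):
    for j in range(J):
        h = 1.0 - np.exp(-np.exp(gamma_true[j] + x[i] * beta_true))
        if rng.uniform() < h:
            k[i], delta[i] = j + 1, 1
            break
    else:
        k[i], delta[i] = J, 0                     # survived all J intervals

def neg_loglik(params):
    gamma, beta = params[:-1], params[-1]
    ll = 0.0
    for ki, di, xi in zip(k, delta, x):
        h = 1.0 - np.exp(-np.exp(gamma[:ki] + xi * beta))
        ll += np.sum(np.log(1.0 - h[:-1]))        # survived intervals 1..ki-1
        ll += np.log(h[-1]) if di else np.log(1.0 - h[-1])
    return -ll

fit = minimize(neg_loglik, np.zeros(J + 1), method="BFGS")
beta_hat = fit.x[-1]                              # should land near 0.5
```

Treating the observed cycle or dose increment as the interval index, rather than as an exact event time, is what keeps this likelihood unbiased for grouped data.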
8. Incorporating prior information into signal-detection analyses across biologically informed gene-sets
- Author
- Sahar Gelfman, David Goldstein, Janice M. McCarthy, Mengqi Zhang, and Andrew S. Allen
- Subjects
Negative selection; Detection theory; Signal Detection Analyses; Disease; Computational biology; Biology; Set (psychology); Gene; Phenotype; Prior information
- Abstract
Signal detection analyses are used to assess whether there is any evidence of signal within a large collection of hypotheses. For example, we may wish to assess whether there is any evidence of association with disease among a set of biologically related genes. Such an analysis typically treats all genes within the sets similarly, even though there is substantial information concerning the likely importance of each gene within each set. For example, deleterious variants within genes that show evidence of purifying selection are more likely to substantially affect the phenotype than genes that are not under purifying selection, at least for traits that are themselves subject to purifying selection. Here we improve such analyses by incorporating prior information into a higher-criticism-based signal detection analysis. We show that when this prior information is predictive of whether a gene is associated with disease, our approach can lead to a significant increase in power. We illustrate our approach with a gene-set analysis of amyotrophic lateral sclerosis (ALS), which implicates a number of gene-sets containing SOD1 and NEK1 as well as showing enrichment of small p-values for gene-sets containing known ALS genes.
- Published
- 2019
- Full Text
- View/download PDF
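One simple way to fold such prior information into a higher-criticism scan, sketched here purely for illustration, is p-value weighting in the style of Genovese, Roeder, and Wasserman; the authors' weighted method differs in detail but shares the premise that predictive priors increase power. This reuses the higher_criticism sketch under entry 4 and assumes hypothetical prior scores (e.g., from purifying-selection evidence).

```python
# Fold prior gene-level information in by p-value weighting: divide each
# p-value by a weight that averages one (Genovese-Roeder-Wasserman style).
import numpy as np

def weighted_pvals(pvals, prior):
    w = np.asarray(prior) / np.mean(prior)     # normalize weights to mean 1
    return np.clip(np.asarray(pvals) / w, 0.0, 1.0)

# e.g., with the higher_criticism() sketch from entry 4:
# hc_obs = higher_criticism(weighted_pvals(gene_pvals, selection_prior))
```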
9. Robust analysis of secondary phenotypes in case-control genetic association studies
- Author
- Josée Dupuis, James B. Meigs, Xihong Lin, Andrew S. Allen, Chuanhua Xing, L. Adrienne Cupples, and Janice M. McCarthy
- Subjects
Statistics and Probability; Epidemiology; Computer science; Population; Inference; Sample (statistics); Estimating equations; Framingham Heart Study; Inverse probability; Statistics; Econometrics; Sampling bias; Genetic association
- Abstract
The case-control study is a common design for assessing the association between genetic exposures and a disease phenotype. Though association with a given (case-control) phenotype is always of primary interest, there is often considerable interest in assessing relationships between genetic exposures and other (secondary) phenotypes. However, the case-control sample represents a biased sample from the general population. As a result, if this sampling framework is not correctly taken into account, analyses estimating the effect of exposures on secondary phenotypes can be biased leading to incorrect inference. In this paper, we address this problem and propose a general approach for estimating and testing the population effect of a genetic variant on a secondary phenotype. Our approach is based on inverse probability weighted estimating equations, where the weights depend on genotype and the secondary phenotype. We show that, though slightly less efficient than a full likelihood-based analysis when the likelihood is correctly specified, it is substantially more robust to model misspecification, and can out-perform likelihood-based analysis, both in terms of validity and power, when the model is misspecified. We illustrate our approach with an application to a case-control study extracted from the Framingham Heart Study. Copyright © 2016 John Wiley & Sons, Ltd.
- Published
- 2016
- Full Text
- View/download PDF
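The inverse-probability-weighting idea can be illustrated as follows. This is a hedged Python sketch, not the authors' estimating equations: their weights also involve genotype and the secondary phenotype, whereas the sketch uses plain design-based case/control sampling weights with a hypothetical prevalence and population size.

```python
# Reweight cases and controls by inverse sampling probabilities so that a
# regression of the secondary phenotype on genotype targets the general
# population rather than the case-enriched sample.
import numpy as np
import statsmodels.api as sm

def ipw_secondary(y2, g, d, prevalence, pop_size):
    """y2: secondary phenotype; g: genotype; d: case status (0/1)."""
    n_cases, n_controls = int(d.sum()), int((1 - d).sum())
    p_case = n_cases / (prevalence * pop_size)            # P(sampled | case)
    p_ctrl = n_controls / ((1.0 - prevalence) * pop_size)  # P(sampled | control)
    w = np.where(d == 1, 1.0 / p_case, 1.0 / p_ctrl)
    # WLS solves the weighted estimating equation sum_i w_i x_i (y2_i - x_i'b) = 0;
    # robust (sandwich) standard errors account for the weighting.
    return sm.WLS(y2, sm.add_constant(g), weights=w).fit(cov_type="HC0")
```

Because cases are oversampled relative to their population prevalence, each case receives a small weight and each control a large one, undoing the sampling bias the abstract describes.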
10. Robust analysis of secondary phenotypes in case-control genetic association studies
- Author
- Chuanhua Xing, Janice M. McCarthy, Josée Dupuis, L. Adrienne Cupples, James B. Meigs, Xihong Lin, and Andrew S. Allen
- Subjects
Likelihood Functions; Phenotype; Models, Genetic; Case-Control Studies; Humans; Polymorphism, Single Nucleotide; Genetic Association Studies; Article; Genome-Wide Association Study
- Abstract
The case-control study is a common design for assessing the association between genetic exposures and a disease phenotype. Though association with a given (case-control) phenotype is always of primary interest, there is often considerable interest in assessing relationships between genetic exposures and other (secondary) phenotypes. However, the case-control sample represents a biased sample from the general population. As a result, if this sampling framework is not correctly taken into account, analyses estimating the effect of exposures on secondary phenotypes can be biased leading to incorrect inference. In this paper, we address this problem and propose a general approach for estimating and testing the population effect of a genetic variant on a secondary phenotype. Our approach is based on inverse probability weighted estimating equations, where the weights depend on genotype and the secondary phenotype. We show that, though slightly less efficient than a full likelihood-based analysis when the likelihood is correctly specified, it is substantially more robust to model misspecification, and can out-perform likelihood-based analysis, both in terms of validity and power, when the model is misspecified. We illustrate our approach with an application to a case-control study extracted from the Framingham Heart Study. Copyright © 2016 John Wiley & Sons, Ltd.
- Published
- 2015
11. Testing the effect of rare compound-heterozygous and recessive mutations in case–parent sequencing studies
- Author
- Andrew S. Allen, Yu Jiang, and Janice M. McCarthy
- Subjects
Genetics; Parents; Heterozygote; Models, Genetic; Epidemiology; Genes, Recessive; Transmission disequilibrium test; Sequence Analysis, DNA; Biology; Population stratification; Compound heterozygosity; Linkage Disequilibrium; Exact test; Quantitative Trait, Heritable; Sample size determination; Mutation; Test statistic; Null distribution; Humans; Computer Simulation; Genetic Predisposition to Disease; Genetic Testing; Gene; Genetics (clinical); Genetic Association Studies
- Abstract
Compound heterozygous mutations are mutations that occur on different copies of genes and may completely “knock-out” gene function. Compound heterozygous mutations have been implicated in a large number of diseases, but there are few statistical methods for analyzing their role in disease, especially when such mutations are rare. A major barrier is that phase information is required to determine whether both gene copies are affected and phasing rare variants is difficult. Here, we propose a method to test compound heterozygous and recessive disease models in case–parent trios. We propose a simple algorithm for phasing and show via simulations that tests based on phased trios have almost the same power as tests using true phase information. A further complication in the study of compound heterozygous mutations is that only families where both parents carry mutations are informative. Thus, the informative sample size will be quite small even when the overall sample size is not, making asymptotic approximations of the null distribution of the test statistic inappropriate. To address this, we develop an exact test that will give appropriate P-values regardless of sample size. Using simulation, we show that our method is robust to population stratification and significantly outperforms other methods when the causal model is recessive.
- Published
- 2014
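The exact-test construction in the abstract above lends itself to a short sketch. This is illustrative Python, not the published method's code: in an informative trio, each parent carries one rare variant in the gene, and under Mendelian transmission the child inherits both copies (a compound heterozygote) with probability 1/4, so a one-sided exact binomial test stays valid however few informative trios the sample contains.

```python
# Exact test of excess compound-heterozygote transmissions in case-parent
# trios; valid at any informative sample size, unlike asymptotic tests.
from scipy.stats import binomtest

def compound_het_exact_test(n_informative, n_compound_het):
    """n_informative: trios where both parents carry a rare variant in the
    gene; n_compound_het: affected children who inherited both variants."""
    return binomtest(n_compound_het, n_informative, p=0.25,
                     alternative="greater").pvalue

# e.g., 7 compound-het children among 12 informative trios (toy numbers):
# p = compound_het_exact_test(12, 7)
```

Conditioning on parental carrier status is also what makes this construction robust to population stratification, as the abstract notes.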