68 results for "Robert Nadon"
Search Results
2. Four erroneous beliefs thwarting more trustworthy research
- Author
-
Mark Yarborough, Robert Nadon, and David G Karlin
- Subjects
trustworthy research, quality improvement, research reforms, reproducibility, point of view, Medicine, Science, Biology (General), QH301-705.5
- Abstract
A range of problems currently undermines public trust in biomedical research. We discuss four erroneous beliefs that may prevent the biomedical research community from recognizing the need to focus on deserving this trust, and which thus act as powerful barriers to necessary improvements in the research process.
- Published
- 2019
- Full Text
- View/download PDF
3. Exact replication: Foundation of science or game of chance?
- Author
-
Sophie K Piper, Ulrike Grittner, Andre Rex, Nico Riedel, Felix Fischer, Robert Nadon, Bob Siegerink, and Ulrich Dirnagl
- Subjects
Biology (General), QH301-705.5
- Abstract
The need for replication of initial results has been rediscovered only recently in many fields of research. In preclinical biomedical research, it is common practice to conduct exact replications with the same sample sizes as those used in the initial experiments. Such replication attempts, however, have a lower probability of replication than is generally appreciated. Indeed, in the common scenario of an effect just reaching statistical significance, the statistical power of the replication experiment assuming the same effect size is approximately 50%, in essence a coin toss. Accordingly, we use the provocative analogy of "replicating" a neuroprotective drug animal study with a coin flip to highlight the need for larger sample sizes in replication experiments. Additionally, we provide detailed background for the probability of obtaining a significant p value in a replication experiment and discuss the variability of p values as well as pitfalls of simple binary significance testing in both initial preclinical experiments and replication studies with small sample sizes. We conclude that power analysis for determining the sample size for a replication study is obligatory within the currently dominant hypothesis testing framework. Moreover, publications should include effect size point estimates and corresponding measures of precision, e.g., confidence intervals, to allow readers to assess the magnitude and direction of reported effects and to potentially combine the results of the initial and replication studies later through Bayesian or meta-analytic approaches.
- Published
- 2019
- Full Text
- View/download PDF
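The "approximately 50%" figure in the abstract above follows directly from the noncentral t distribution. The sketch below is illustrative only (it is not code from the paper) and assumes a two-sided, two-sample t-test with an arbitrary n = 10 per group; scipy supplies the distribution functions.

```python
# Illustrative only (not code from the paper): power of an exact replication
# when the original two-sample result just reached p = 0.05 (two-sided).
import numpy as np
from scipy import stats

n = 10                                   # assumed per-group sample size
alpha = 0.05
df = 2 * n - 2
t_crit = stats.t.ppf(1 - alpha / 2, df)

# Observed standardized effect (Cohen's d) that lands exactly on p = 0.05
d_obs = t_crit * np.sqrt(2 / n)

# Power of a same-size replication, assuming the true effect equals d_obs
ncp = d_obs * np.sqrt(n / 2)             # noncentrality parameter (= t_crit here)
power_same_n = (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)
print(f"power of an exact replication: {power_same_n:.2f}")   # ~0.50, a coin toss

# Smallest per-group n giving >= 80% power for the same assumed effect
for n_rep in range(n, 1000):
    df_r = 2 * n_rep - 2
    crit_r = stats.t.ppf(1 - alpha / 2, df_r)
    if 1 - stats.nct.cdf(crit_r, df_r, d_obs * np.sqrt(n_rep / 2)) >= 0.80:
        print(f"~80% power needs about n = {n_rep} per group")
        break
```

In this example the same-size replication has roughly coin-flip power, and about twice the original sample size is needed to reach 80% power under the same assumed effect.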
4. Translation control during prolonged mTORC1 inhibition mediated by 4E-BP3
- Author
-
Yoshinori Tsukumo, Tommy Alain, Bruno D. Fonseca, Robert Nadon, and Nahum Sonenberg
- Subjects
Science
- Abstract
The eIF4E-binding proteins (4E-BPs) are critical repressors of cap-dependent translation via mTOR, a pathway frequently hyperactivated in cancer. Here the authors show that 4E-BP3 specifically mediates the cap-dependent translation repression and antiproliferative effects of prolonged pharmacological mTOR inhibition.
- Published
- 2016
- Full Text
- View/download PDF
5. Acacetin and Chrysin, Two Polyphenolic Compounds, Alleviate Telomeric Position Effect in Human Cells
- Author
-
Amina Boussouar, Caroline Barette, Robert Nadon, Adelaïde Saint-Léger, Natacha Broucqsault, Alexandre Ottaviani, Arva Firozhoussen, Yiming Lu, Laurence Lafanechère, Eric Gilson, Frédérique Magdinier, and Jing Ye
- Subjects
DNA damage response, flavonoid, polyphenol, telomere, telomeric position effect, telomere-induced foci, Therapeutics. Pharmacology, RM1-950
- Abstract
We took advantage of the ability of human telomeres to silence neighboring genes (telomere position effect or TPE) to design a high-throughput screening assay for drugs altering telomeres. We identified, for the first time, that two dietary flavones, acacetin and chrysin, are able to specifically alleviate TPE in human cells. We further investigated their influence on telomere integrity and showed that both drugs drastically deprotect telomeres against DNA damage response. However, telomere deprotection triggered by shelterin dysfunction does not affect TPE, indicating that acacetin and chrysin target several functions of telomeres. These results show that TPE-based screening assays represent valuable methods to discover new compounds targeting telomeres.
- Published
- 2013
- Full Text
- View/download PDF
6. Vesicoureteral reflux and other urinary tract malformations in mice compound heterozygous for Pax2 and Emx2.
- Author
-
Sami K Boualia, Yaned Gaitan, Inga Murawski, Robert Nadon, Indra R Gupta, and Maxime Bouchard
- Subjects
Medicine, Science
- Abstract
Congenital anomalies of the kidney and urinary tract (CAKUT) are the most common cause of chronic kidney disease in children. This disease group includes a spectrum of urinary tract defects including vesicoureteral reflux, duplex kidneys and other developmental defects that can be found alone or in combination. To identify new regulators of CAKUT, we tested the genetic cooperativity between several key regulators of urogenital system development in mice. We found a high incidence of urinary tract anomalies in Pax2;Emx2 compound heterozygous mice that are not found in single heterozygous mice. Pax2⁺/⁻;Emx2⁺/⁻ mice harbor duplex systems associated with urinary tract obstruction, bifid ureter and a high penetrance of vesicoureteral reflux. Remarkably, most compound heterozygous mice refluxed at low intravesical pressure. Early analysis of Pax2⁺/⁻;Emx2⁺/⁻ embryos points to ureter budding defects as the primary cause of urinary tract anomalies. We additionally establish Pax2 as a direct regulator of Emx2 expression in the Wolffian duct. Together, these results identify a haploinsufficient genetic combination resulting in a CAKUT-like phenotype, including a high sensitivity to vesicoureteral reflux. As both genes are located on human chromosome 10q, which is lost in a proportion of VUR patients, these findings may help understand VUR and CAKUT in humans.
- Published
- 2011
- Full Text
- View/download PDF
7. A New Effective Method for Elimination of Systematic Error in Experimental High-Throughput Screening.
- Author
-
Vladimir Makarenkov, Plamen Dragiev, and Robert Nadon
- Published
- 2013
- Full Text
- View/download PDF
8. Accurate deep learning off-target prediction with novel sgRNA-DNA sequence encoding in CRISPR-Cas9 gene editing
- Author
-
Robert Nadon, Jeremy Charlier, and Vladimir Makarenkov
- Subjects
Statistics and Probability, Computer science, Deep learning, Pattern recognition, Biochemistry, Convolutional neural network, Field (computer science), Computer Science Applications, Random forest, Computational Mathematics, Naive Bayes classifier, Recurrent neural network, Computational Theory and Mathematics, Encoding (memory), Feedforward neural network, Artificial intelligence, Molecular Biology
- Abstract
Motivation: Off-target predictions are crucial in gene editing research. Recently, significant progress has been made in the field of prediction of off-target mutations, particularly with CRISPR-Cas9 data, thanks to the use of deep learning. CRISPR-Cas9 is a gene editing technique which allows manipulation of DNA fragments. The sgRNA-DNA (single guide RNA-DNA) sequence encoding for deep neural networks, however, has a strong impact on the prediction accuracy. We propose a novel encoding of sgRNA-DNA sequences that aggregates sequence data with no loss of information. Results: In our experiments, we compare the proposed sgRNA-DNA sequence encoding applied in a deep learning prediction framework with state-of-the-art encoding and prediction methods. We demonstrate the superior accuracy of our approach in a simulation study involving Feedforward Neural Networks (FNNs), Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) as well as the traditional Random Forest (RF), Naive Bayes (NB) and Logistic Regression (LR) classifiers. We highlight the quality of our results by building several FNNs, CNNs and RNNs with various layer depths and performing predictions on two popular gene editing datasets (CRISPOR and GUIDE-seq). In all our experiments, the new encoding led to more accurate off-target prediction results, providing an improvement of the area under the Receiver Operating Characteristic (ROC) curve of up to 35%. Availability and implementation: The code and data used in this study are available at: https://github.com/dagrate/dl-offtarget. Supplementary information: Supplementary data are available at Bioinformatics online.
- Published
- 2020
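The abstract does not spell out the encoding itself (the authors' code is at the linked GitHub repository), so the snippet below is only a generic illustration of the kind of pairwise sgRNA-DNA representation fed to off-target classifiers; the 16-channel scheme, sequence length, and example sequences are assumptions, not the paper's method.

```python
# Generic illustration (not the paper's encoding): represent an aligned
# guide/target pair as a (length x 16) binary matrix, one channel per
# (guide base, target base) combination, for input to a neural network.
import numpy as np

BASES = "ACGT"

def encode_pair(guide: str, target: str) -> np.ndarray:
    """One-hot encode two aligned DNA sequences of equal length."""
    assert len(guide) == len(target)
    enc = np.zeros((len(guide), 16), dtype=np.float32)
    for i, (g, t) in enumerate(zip(guide.upper(), target.upper())):
        enc[i, BASES.index(g) * 4 + BASES.index(t)] = 1.0
    return enc

# Hypothetical 23-nt guide (with PAM) and a one-mismatch off-target site
x = encode_pair("GAGTCCGAGCAGAAGAAGAAGGG",
                "GAGTCCTAGCAGAAGAAGAAGGG")
print(x.shape)   # (23, 16); mismatches light up off-diagonal channels
```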
9. Detecting and removing multiplicative spatial bias in high-throughput screening technologies
- Author
-
Vladimir Makarenkov, Iurie Caraus, Bogdan Mazoure, and Robert Nadon
- Subjects
Statistics and Probability, Accuracy and precision, Computer science, False positives and false negatives, HIV Infections, Toxicology, Biochemistry, Bias, Drug Discovery, Humans, Bias correction, Molecular Biology, Protocol (science), Multiplicative function, Computational Biology, Experimental data, High-Throughput Screening Assays, Computer Science Applications, Computational Mathematics, Computational Theory and Mathematics, Data quality, Biological Assay, Data mining, Software
- Abstract
Motivation: Considerable attention has been paid recently to improving data quality in high-throughput screening (HTS) and high-content screening (HCS) technologies widely used in drug development and chemical toxicity research. However, several environmentally- and procedurally-induced spatial biases in experimental HTS and HCS screens decrease measurement accuracy, leading to increased numbers of false positives and false negatives in hit selection. Although effective bias correction methods and software have been developed over the past decades, almost all of these tools have been designed to reduce the effect of additive bias only. Here, we address the case of multiplicative spatial bias. Results: We introduce three new statistical methods meant to reduce multiplicative spatial bias in screening technologies. We assess the performance of the methods with synthetic and real data affected by multiplicative spatial bias, including comparisons with current bias correction methods. We also describe a wider data correction protocol that integrates methods for removing both assay and plate-specific spatial biases, which can be either additive or multiplicative. Conclusions: The methods for removing multiplicative spatial bias and the data correction protocol are effective in detecting and cleaning experimental data generated by screening technologies. As our protocol is of a general nature, it can be used by researchers analyzing current or next-generation high-throughput screens. Availability and implementation: The AssayCorrector program, implemented in R, is available on CRAN. Supplementary information: Supplementary data are available at Bioinformatics online.
- Published
- 2017
- Full Text
- View/download PDF
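For readers unfamiliar with the distinction drawn in the abstract above: additive bias shifts well measurements while multiplicative bias scales them, so a common generic remedy is to work on the log scale. The sketch below is a minimal illustration of that idea using Tukey's median polish; it is not the AssayCorrector implementation, and the plate dimensions and bias pattern are invented for the example.

```python
# Minimal illustration (not AssayCorrector): treat multiplicative row/column
# bias as additive on the log scale, remove it with Tukey's median polish,
# and map the residuals back to the raw measurement scale.
import numpy as np

def median_polish_residuals(x, n_iter=10):
    """Residuals after iteratively sweeping out row and column medians."""
    r = x.copy()
    for _ in range(n_iter):
        r -= np.median(r, axis=1, keepdims=True)   # row effects
        r -= np.median(r, axis=0, keepdims=True)   # column effects
    return r

def correct_multiplicative_bias(plate):
    """plate: 2-D array of positive raw well measurements."""
    resid = median_polish_residuals(np.log(plate))
    return np.exp(resid + np.log(np.median(plate)))  # restore overall level

rng = np.random.default_rng(0)
plate = rng.lognormal(mean=0.0, sigma=0.1, size=(8, 12))
plate[:, -1] *= 1.5                     # 1.5-fold bias in the last column
corrected = correct_multiplicative_bias(plate)
print(plate[:, -1].mean().round(2), corrected[:, -1].mean().round(2))
```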
10. Statistics and Biology: Not Your Average Relationship
- Author
-
Paul S. Kayne and Robert Nadon
- Subjects
Text mining, Humans, Molecular Medicine, Biostatistics, Biology, Biochemistry, Data science, Analytical Chemistry, Biotechnology
- Published
- 2018
- Full Text
- View/download PDF
11. Author response: Four erroneous beliefs thwarting more trustworthy research
- Author
-
David G. Karlin, Mark A Yarborough, and Robert Nadon
- Subjects
Trustworthiness, Internet privacy, Psychology
- Published
- 2019
- Full Text
- View/download PDF
12. Identification and Correction of Additive and Multiplicative Spatial Biases in Experimental High-Throughput Screening
- Author
-
Vladimir Makarenkov, Bogdan Mazoure, Robert Nadon, and Iurie Caraus
- Subjects
Anderson–Darling test, Computer science, Intersection (set theory), High-throughput screening, Multiplicative function, Row and column spaces, Biochemistry, Analytical Chemistry, High-Throughput Screening Assays, Small Molecule Libraries, Identification (information), Bias, Cramér–von Mises criterion, High-content screening, Drug Discovery, Molecular Medicine, Algorithm, Databases, Chemical, Biotechnology
- Abstract
Data generated by high-throughput screening (HTS) technologies are prone to spatial bias. Traditionally, bias correction methods used in HTS assume either a simple additive or, more recently, a simple multiplicative spatial bias model. These models do not, however, always provide an accurate correction of measurements in wells located at the intersection of rows and columns affected by spatial bias. The measurements in these wells depend on the nature of interaction between the involved biases. Here, we propose two novel additive and two novel multiplicative spatial bias models accounting for different types of bias interactions. We describe a statistical procedure that allows for detecting and removing different types of additive and multiplicative spatial biases from multiwell plates. We show how this procedure can be applied by analyzing data generated by the four HTS technologies (homogeneous, microorganism, cell-based, and gene expression HTS), the three high-content screening (HCS) technologies (area, intensity, and cell-count HCS), and the only small-molecule microarray technology available in the ChemBank small-molecule screening database. The proposed methods are included in the AssayCorrector program, implemented in R, and available on CRAN.
- Published
- 2018
13. Translational control of depression-like behavior via phosphorylation of eukaryotic translation initiation factor 4E
- Author
-
Christoph Rummel, Mohammad J. Eslamizade, Arnaud Tanti, Naguib Mechawar, Nahum Sonenberg, Vijendra Sharma, Gustavo Turecki, Stefano Comai, Jelena Popic, Edna Matta-Camacho, Giamal N. Luheshi, Nabila Haji, Robert Nadon, Shane Wiebe, Ruifeng Cao, Danilo De Gregorio, Argel Aguilar-Valles, Nicolas A. Nuñez, Gabriella Gobbi, and Jean-Claude Lacaille
- Subjects
Male, Eukaryotic Initiation Factor-4E, General Physics and Astronomy, Anxiety, Synaptic Transmission, Mice, NF-KappaB Inhibitor alpha, Phosphorylation, Serotonin and Noradrenaline Reuptake Inhibitors, Mice, Knockout, Multidisciplinary, Behavior, Animal, Chemistry, Depression, EIF4E, Antidepressive Agents, Tumor necrosis factor alpha, Female, Ketamine, Science, Citalopram, Protein Serine-Threonine Kinases, General Biochemistry, Genetics and Molecular Biology, Article, Dorsal raphe nucleus, Internal medicine, Fluoxetine, Animals, Protein kinase A, Benzofurans, Inflammation, Messenger RNA, Depressive Disorder, Major, General Chemistry, Mice, Inbred C57BL, IκBα, Endocrinology, Protein Biosynthesis
- Abstract
Translation of mRNA into protein has a fundamental role in neurodevelopment, plasticity, and memory formation; however, its contribution to the pathophysiology of depressive disorders is not fully understood. We investigated the involvement of MNK1/2 (MAPK-interacting serine/threonine-protein kinase 1 and 2) and their target, eIF4E (eukaryotic initiation factor 4E), in depression-like behavior in mice. Mice carrying a mutation in eIF4E for the MNK1/2 phosphorylation site (Ser209Ala, Eif4e ki/ki), the Mnk1/2 double knockout mice (Mnk1/2−/−), or mice treated with the MNK1/2 inhibitor, cercosporamide, displayed anxiety- and depression-like behaviors, impaired serotonin-induced excitatory synaptic activity in the prefrontal cortex, and diminished firing of the dorsal raphe neurons. In Eif4e ki/ki mice, brain IκBα was decreased, while the NF-κB target TNFα was elevated. TNFα inhibition in Eif4e ki/ki mice rescued, whereas TNFα administration to wild-type mice mimicked, the depression-like behaviors and 5-HT synaptic deficits. We conclude that eIF4E phosphorylation modulates depression-like behavior through regulation of inflammatory responses. Translation of mRNA contributes to neuronal function and complex behaviours, and inflammation is thought to contribute to depression. Here the authors show that mice lacking phosphorylation sites in eIF4E (eukaryotic initiation factor 4E) display anxiety- and depression-like behaviour and decreased IkBα expression; furthermore TNFα delivery to the medial prefrontal cortex induces depression-like behaviour and deficits in serotonergic transmission.
- Published
- 2018
14. Identification and correction of spatial bias are essential for obtaining quality data in high-throughput screening technologies
- Author
-
Vladimir Makarenkov, Bogdan Mazoure, and Robert Nadon
- Subjects
Multidisciplinary, Computer science, High-throughput screening, Medicine, Small molecule, Article, Identification (information), Data quality, Quality (business), Data mining, Science, Spatial bias
- Abstract
Spatial bias continues to be a major challenge in high-throughput screening technologies. Its successful detection and elimination are critical for identifying the most promising drug candidates. Here, we examine experimental small molecule assays from the popular ChemBank database and show that screening data are widely affected by both assay-specific and plate-specific spatial biases. Importantly, the bias affecting screening data can fit an additive or multiplicative model. We show that the use of appropriate statistical methods is essential for improving the quality of experimental screening data. The presented methodology can be recommended for the analysis of current and next-generation screening data.
- Published
- 2017
- Full Text
- View/download PDF
15. Novel Protein Interactions with Endoglin and Activin Receptor-like Kinase 1: Potential Role in Vascular Networks
- Author
-
Andrei L. Turinsky, Sonia Vera, Miriam Barrios-Rodiles, Robert Nadon, Jeffrey L. Wrana, Despina Voulgaraki, Michelle Letarte, Guoxiong Xu, Mourad Toporsian, and Mirjana Jerkic
- Subjects
Activin Receptors, Type II, GDF2, Receptors, Cell Surface, Biochemistry, Analytical Chemistry, Mice, Antigens, CD, Transforming Growth Factor beta, hemic and lymphatic diseases, otorhinolaryngologic diseases, Animals, Humans, Protein Interaction Maps, Receptor, Molecular Biology, Mice, Knockout, Research, HEK293 Cells, Endoglin, ACVRL1, Protein phosphatase 2, Activin receptor, Embryo, Mammalian, Cancer research, Blood Vessels, Telangiectasia, Hereditary Hemorrhagic, Endothelium, Vascular, Protein Binding
- Abstract
Endoglin and activin receptor-like kinase 1 are specialized transforming growth factor-beta (TGF-β) superfamily receptors, primarily expressed in endothelial cells. Mutations in the corresponding ENG or ACVRL1 genes lead to hereditary hemorrhagic telangiectasia (HHT1 and HHT2 respectively). To discover proteins interacting with endoglin, ACVRL1 and TGF-β receptor type 2 and involved in TGF-β signaling, we applied LUMIER, a high-throughput mammalian interactome mapping technology. Using stringent criteria, we identified 181 novel unique and shared interactions with ACVRL1, TGF-β receptor type 2, and endoglin, defining potential novel important vascular networks. In particular, the regulatory subunit B-beta of the protein phosphatase PP2A (PPP2R2B) interacted with all three receptors. Interestingly, the PPP2R2B gene lies in an interval in linkage disequilibrium with HHT3, for which the gene remains unidentified. We show that PPP2R2B protein interacts with the ACVRL1/TGFBR2/endoglin complex and recruits PP2A to nitric oxide synthase 3 (NOS3). Endoglin overexpression in endothelial cells inhibits the association of PPP2R2B with NOS3, whereas endoglin-deficient cells show enhanced PP2A-NOS3 interaction and lower levels of endogenous NOS3 Serine 1177 phosphorylation. Our data suggest that endoglin regulates NOS3 activation status by regulating PPP2R2B access to NOS3, and that PPP2R2B might be the HHT3 gene. Furthermore, endoglin and ACVRL1 contribute to several novel networks, including TGF-β dependent and independent ones, critical for vascular function and potentially defective in HHT.
- Published
- 2014
- Full Text
- View/download PDF
16. Single assay-wide variance experimental (SAVE) design for high-throughput screening
- Author
-
Laurence Lafanechère, Robert Nadon, Caroline Barette, and Carl Murie
- Subjects
Statistics and Probability, False discovery rate, [SDV.OT]Life Sciences [q-bio]/Other [q-bio.OT], Computer science, High-throughput screening, Biochemistry, Bayes' theorem, Replication (statistics), Computer Simulation, Molecular Biology, Statistical hypothesis testing, Variance (accounting), Models, Theoretical, High-Throughput Screening Assays, Computer Science Applications, Computational Mathematics, Pharmaceutical Preparations, Computational Theory and Mathematics, Research Design, Simulated data, Random error, Data mining, [STAT.ME]Statistics [stat]/Methodology [stat.ME]
- Abstract
Motivation: Advantages of statistical testing of high-throughput screens include P-values, which provide objective benchmarks of compound activity, and false discovery rate estimation. The cost of replication required for statistical testing, however, may often be prohibitive. We introduce the single assay-wide variance experimental (SAVE) design whereby a small replicated subset of an entire screen is used to derive empirical Bayes random error estimates, which are applied to the remaining majority of unreplicated measurements. Results: The SAVE design is able to generate P-values comparable with those generated with full replication data. It performs almost as well as the random variance model t-test with duplicate data and outperforms the commonly used Z-scores with unreplicated data and the standard t-test. We illustrate the approach with simulated data and with experimental small molecule and small interfering RNA screens. The SAVE design provides substantial performance improvements over unreplicated screens with only slight increases in cost. Contact: robert.nadon@mcgill.ca Supplementary information: Supplementary data are available at Bioinformatics online.
- Published
- 2013
- Full Text
- View/download PDF
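A toy numerical sketch of the general idea described in the abstract above: an error variance estimated from a small replicated subset is reused to test the unreplicated majority of the screen. It is not the published SAVE estimator; the empirical Bayes shrinkage, normalization, and false discovery rate steps are omitted, and all sample sizes and effect sizes are invented.

```python
# Toy sketch of the SAVE idea (not the published estimator): the error
# variance measured on a small replicated subset is reused to test the
# unreplicated remainder of the screen.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Replicated subset: 200 compounds measured in duplicate (normalized scores)
duplicates = rng.normal(0.0, 0.15, size=(200, 2))
err_var = np.mean(np.var(duplicates, axis=1, ddof=1))   # pooled error variance
err_df = duplicates.shape[0]                            # one df per duplicate pair

# Unreplicated measurements for the rest of the screen, with a few actives
singles = rng.normal(0.0, 0.15, size=5000)
singles[:25] += 0.75

# t-like statistic for each single measurement against a null effect of zero
t = singles / np.sqrt(err_var)
p = 2 * stats.t.sf(np.abs(t), df=err_df)
print(f"measurements with p < 1e-4: {np.sum(p < 1e-4)}")
```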
17. Translation control during prolonged mTORC1 inhibition mediated by 4E-BP3
- Author
-
Robert Nadon, Bruno D. Fonseca, Yoshinori Tsukumo, Tommy Alain, and Nahum Sonenberg
- Subjects
Male, Indoles, General Physics and Astronomy, Cell Cycle Proteins, mTORC1, Mice, Eukaryotic initiation factor, Databases, Genetic, Eukaryotic Initiation Factors, Gene Editing, Regulation of gene expression, Antibiotics, Antineoplastic, Multidisciplinary, Basic Helix-Loop-Helix Leucine Zipper Transcription Factors, Effector, TOR Serine-Threonine Kinases, Translation (biology), Hep G2 Cells, Gene Expression Regulation, Neoplastic, MCF-7 Cells, Female, biological phenomena, cell phenomena, and immunity, Signal Transduction, Science, Breast Neoplasms, Biology, Article, General Biochemistry, Genetics and Molecular Biology, Animals, Humans, Transcription factor, Adaptor Proteins, Signal Transducing, Cell Proliferation, Sirolimus, General Chemistry, Phosphoproteins, Survival Analysis, Mice, Inbred C57BL, Purines, Protein Biosynthesis, Cancer research, CRISPR-Cas Systems, Carrier Proteins, HeLa Cells
- Abstract
Targeting mTORC1 is a highly promising strategy in cancer therapy. Suppression of mTORC1 activity leads to rapid dephosphorylation of eIF4E-binding proteins (4E-BP1–3) and subsequent inhibition of mRNA translation. However, how the different 4E-BPs affect translation during prolonged use of mTOR inhibitors is not known. Here we show that the expression of 4E-BP3, but not that of 4E-BP1 or 4E-BP2, is transcriptionally induced during prolonged mTORC1 inhibition in vitro and in vivo. Mechanistically, our data reveal that 4E-BP3 expression is controlled by the transcription factor TFE3 through a cis-regulatory element in the EIF4EBP3 gene promoter. CRISPR/Cas9-mediated EIF4EBP3 gene disruption in human cancer cells mitigated the inhibition of translation and proliferation caused by prolonged treatment with mTOR inhibitors. Our findings show that 4E-BP3 is an important effector of mTORC1 and a robust predictive biomarker of therapeutic response to prolonged treatment with mTOR-targeting drugs in cancer. The eIF4E-binding proteins (4E-BPs) are critical repressors of cap-dependent translation via mTOR, a pathway frequently hyperactivated in cancer. Here the authors show that 4E-BP3 specifically mediates the cap-dependent translation repression and antiproliferative effects of prolonged pharmacological mTOR inhibition.
- Published
- 2016
- Full Text
- View/download PDF
18. Control of embryonic stem cell self-renewal and differentiation via coordinated alternative splicing and translation of YY2
- Author
-
Ulrich Braunschweig, Soroush Tahmasebi, Yaser Atlasi, Yojiro Yamanaka, Robert Nadon, Nahum Sonenberg, Edna Matta-Camacho, Xiang-Jiao Yang, Thomas Gonatopoulos-Pournatzis, Christos G. Gkogkas, Hendrik G. Stunnenberg, Dana Pearl, Akiko Yanagiya, Vincent Giguère, Benjamin J. Blencowe, Guillaume Bourque, Wencheng Li, Bin Tian, Yoshinori Tsukumo, Maxime Caron, Arkady Khoutorsky, Ingrid S. Tam, and Seyed Mehdi Jafarnejad
- Subjects
Transcription, Genetic, Regulator, Biology, Models, Biological, Heterogeneous-Nuclear Ribonucleoproteins, Mice, Eukaryotic initiation factor, Transcriptional regulation, Animals, Cell Lineage, RNA, Messenger, Cell Self Renewal, RNA, Small Interfering, Molecular Biology, Embryonic Stem Cells, YY1 Transcription Factor, Mice, Knockout, Genetics, Multidisciplinary, Alternative Splicing, Gene Expression Regulation, Developmental, Cell Differentiation, PTBP1, Biological Sciences, Phosphoproteins, Introns, Cell biology, Blastocyst, Receptors, Estrogen, Protein Biosynthesis, RNA splicing, Carrier Proteins, Octamer Transcription Factor-3, Polypyrimidine Tract-Binding Protein, Transcription Factors
- Abstract
Significance: Embryonic stem cells (ESCs) maintain a low translation rate; therefore, control of mRNA translation is critical for preserving their stemness. We identified a hitherto unstudied transcription factor, Yin-yang 2 (YY2), which is translationally regulated and controls self-renewal and differentiation of mouse ESCs (mESCs). Although YY2 is essential for mESC self-renewal, increased YY2 expression directs differentiation of mESCs toward cardiovascular lineages. Examination of the Yy2 5′-UTR revealed a multilayer regulatory mechanism through which YY2 expression is dictated by the combined actions of the splicing regulator, Polypyrimidine tract-binding protein 1 (PTBP1), and the translation inhibitors, Eukaryotic initiation factor 4E-binding proteins (4E-BPs). YY2 directly controls the expression of several pluripotency and development-related genes. This study describes a synchronized network of alternative splicing and mRNA translation in controlling self-renewal and differentiation.
- Published
- 2016
19. Two effective methods for correcting experimental high-throughput screening data
- Author
-
Vladimir Makarenkov, Robert Nadon, and Plamen Dragiev
- Subjects
Statistics and Probability, Systematic error, Computer science, Process (engineering), Biochemistry, High-Throughput Screening Assays, Computer Science Applications, Computational Mathematics, Computational Theory and Mathematics, Drug Discovery, Computer Simulation, Data pre-processing, Data mining, Error detection and correction, Molecular Biology
- Abstract
Motivation: Rapid advances in biomedical sciences and genetics have increased the pressure on drug development companies to promptly translate new knowledge into treatments for disease. Impelled by the demand and facilitated by technological progress, the number of compounds evaluated during the initial high-throughput screening (HTS) step of drug discovery process has steadily increased. As a highly automated large-scale process, HTS is prone to systematic error caused by various technological and environmental factors. A number of error correction methods have been designed to reduce the effect of systematic error in experimental HTS (Brideau et al., 2003; Carralot et al., 2012; Kevorkov and Makarenkov, 2005; Makarenkov et al., 2007; Malo et al., 2010). Despite their power to correct systematic error when it is present, the applicability of those methods in practice is limited by the fact that they can potentially introduce a bias when applied to unbiased data. We describe two new methods for eliminating systematic error from HTS data based on a prior knowledge of the error location. This information can be obtained using a specific version of the t-test or of the χ2 goodness-of-fit test as discussed in Dragiev et al. (2011). We will show that both new methods constitute an important improvement over the standard practice of not correcting for systematic error at all as well as over the B-score correction procedure (Brideau et al., 2003) which is widely used in the modern HTS. We will also suggest a more general data preprocessing framework where the new methods can be applied in combination with the Well Correction procedure (Makarenkov et al., 2007). Such a framework will allow for removing systematic biases affecting all plates of a given screen as well as those relative to some of its individual plates. Contact: makarenkov.vladimir@uqam.ca Supplementary information: Supplementary data are available at Bioinformatics online.
- Published
- 2012
- Full Text
- View/download PDF
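The correction methods above assume the location of systematic error is known beforehand, obtained with versions of the t-test or χ2 goodness-of-fit test as discussed in Dragiev et al. (2011). The snippet below is an illustrative row-level check in that spirit, not the specific tests used in the paper; the plate size, bias magnitude, and significance threshold are arbitrary.

```python
# Illustrative row-level error check (a Welch t-test of each row against the
# rest of the plate); the paper's detection tests follow Dragiev et al. (2011)
# and may differ in detail.
import numpy as np
from scipy import stats

def biased_rows(plate, alpha=0.01):
    """Indices of rows whose mean differs from the rest of the plate."""
    flagged = []
    for i in range(plate.shape[0]):
        rest = np.delete(plate, i, axis=0).ravel()
        stat, p = stats.ttest_ind(plate[i], rest, equal_var=False)
        if p < alpha:
            flagged.append(i)
    return flagged

rng = np.random.default_rng(5)
plate = rng.normal(1.0, 0.05, size=(16, 24))   # one 384-well plate
plate[3] *= 1.10                               # systematic error in row 3
print(biased_rows(plate))                      # -> [3]
```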
20. Identification of differential translation in genome wide studies
- Author
-
Ola Larsson, Nahum Sonenberg, and Robert Nadon
- Subjects
Genetics, Multidisciplinary, Genome, Human, Contrast (statistics), Translation (biology), Biological Sciences, Biology, Genome, Gene Expression Regulation, Protein Biosynthesis, Polysome, Databases, Genetic, Humans, Human genome, RNA, Messenger, Ribosome profiling, Ribosomes, Gene, Genome-Wide Association Study, Oligonucleotide Array Sequence Analysis
- Abstract
Regulation of gene expression through translational control is a fundamental mechanism implicated in many biological processes ranging from memory formation to innate immunity and whose dysregulation contributes to human diseases. Genome wide analyses of translational control strive to identify differential translation independent of cytosolic mRNA levels. For this reason, most studies measure genes’ translation levels as log ratios (translation levels divided by corresponding cytosolic mRNA levels obtained in parallel). Counterintuitively, arising from a mathematical necessity, these log ratios tend to be highly correlated with the cytosolic mRNA levels. Accordingly, they do not effectively correct for cytosolic mRNA level and generate substantial numbers of biological false positives and false negatives. We show that analysis of partial variance, which produces estimates of translational activity that are independent of cytosolic mRNA levels, is a superior alternative. When combined with a variance shrinkage method for estimating error variance, analysis of partial variance has the additional benefit of having greater statistical power and identifying fewer genes as translationally regulated resulting merely from unrealistically low variance estimates rather than from large changes in translational activity. In contrast to log ratios, this formal analytical approach estimates translation effects in a statistically rigorous manner, eliminates the need for inefficient and error-prone heuristics, and produces results that agree with biological function. The method is applicable to datasets obtained from both the commonly used polysome microarray method and the sequencing-based ribosome profiling method.
- Published
- 2010
- Full Text
- View/download PDF
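A minimal numerical contrast between the two approaches discussed in the abstract above, for a single gene in two conditions: a log-ratio measure picks up a spurious "translation" change whenever polysome-associated mRNA does not track cytosolic mRNA one-for-one, whereas a regression that includes cytosolic mRNA as a covariate (the partial-variance idea) does not. This is a toy illustration, not the authors' published implementation (available, to our knowledge, as the anota Bioconductor package); all simulated values are invented.

```python
# Toy contrast (not the authors' implementation) between a log-ratio measure
# of translation and a covariate-adjusted ("partial variance") estimate,
# for one gene in two conditions; all values are simulated.
import numpy as np

rng = np.random.default_rng(2)
n = 10                                          # samples per condition
condition = np.repeat([0.0, 1.0], n)

# Cytosolic mRNA (log2) shifts up by 0.8 in condition 1; translation itself
# is unchanged, but polysome-associated mRNA only partially tracks the shift.
cytosolic = rng.normal(8.0, 0.3, 2 * n) + 0.8 * condition
polysome = 4.0 + 0.5 * cytosolic + rng.normal(0.0, 0.1, 2 * n)

# (1) log-ratio approach: condition effect on (polysome - cytosolic)
ratio = polysome - cytosolic
log_ratio_effect = ratio[condition == 1].mean() - ratio[condition == 0].mean()

# (2) partial-variance approach: condition effect on polysome levels after
#     adjusting for cytosolic mRNA as a covariate (ANCOVA-style regression)
X = np.column_stack([np.ones(2 * n), cytosolic, condition])
beta, *_ = np.linalg.lstsq(X, polysome, rcond=None)

print(f"log-ratio 'translation' effect: {log_ratio_effect:+.2f}")
print(f"covariate-adjusted effect:      {beta[2]:+.2f}")
```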
21. Gene Expression – Time to Change Point of View?
- Author
-
Robert Nadon and Ola Larsson
- Subjects
Transcriptional Activation, Untranslated Regions, Gene Expression, Bioengineering, Computational biology, Biology, Translational regulation, Transcriptional regulation, Protein Biosynthesis, Animals, Humans, RNA, Messenger, Molecular Biology, Oligonucleotide Array Sequence Analysis, Genetics, Regulation of gene expression, Gene Expression Profiling, Systems Biology, Translational elongation, Ribosomes, Genome-Wide Association Study, Biotechnology
- Abstract
Analysis of transcription profiles has been the focus of genome wide characterization of gene expression during the last decade. Downstream of transcription, regulation of translation represents a less explored step in the gene expression pathway. Differential translation can be caused by differential ribosome recruitment, translational elongation or termination although the ribosome recruitment step is thought to be the major source of differential translation. Genome wide studies of differential translation through analysis of ribosome recruitment in a variety of model systems indicate better correlation to protein levels as compared to transcriptional regulation. These studies also indicate translational control as a major transcript specific regulation step. Here we review the current literature on genome wide regulation of ribosome recruitment. We conclude that without considering regulation of ribosome recruitment, important information regarding the links between gene expression and protein levels is lost and that ribosome recruitment will be an integral part of a systems level understanding for regulation of gene expression.
- Published
- 2008
- Full Text
- View/download PDF
22. An efficient method for the detection and elimination of systematic error in high-throughput screening
- Author
-
Robert Nadon, Dmytro Kevorkov, Nathalie Malo, Vladimir Makarenkov, Pablo Zentilli, and Andrei Gagarin
- Subjects
Statistics and Probability, Systematic error, Computer science, High-throughput screening, Drug Evaluation, Preclinical, Sensitivity and Specificity, Biochemistry, Technology, Pharmaceutical, Bioassay, Hit selection, Molecular Biology, Selection (genetic algorithm), Drug discovery, Process (computing), Computer Science Applications, Computational Mathematics, Computational Theory and Mathematics, Data Interpretation, Statistical, Drug Design, Biological Assay, Artificial intelligence, Data mining, Artifacts
- Abstract
Motivation: High-throughput screening (HTS) is an early-stage process in drug discovery which allows thousands of chemical compounds to be tested in a single study. We report a method for correcting HTS data prior to the hit selection process (i.e. selection of active compounds). The proposed correction minimizes the impact of systematic errors which may affect the hit selection in HTS. The introduced method, called a well correction, proceeds by correcting the distribution of measurements within wells of a given HTS assay. We use simulated and experimental data to illustrate the advantages of the new method compared to other widely-used methods of data correction and hit selection in HTS. Contact: makarenkov.vladimir@uqam.ca Supplementary information: Supplementary data are available at Bioinformatics online.
- Published
- 2007
- Full Text
- View/download PDF
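A simplified sketch of the well-correction idea summarized above: each well location is examined across all plates of the assay and its systematic tendency removed. This is not the exact procedure of the paper (which also addresses the distribution of measurements within wells and subsequent hit selection); the z-score normalization, plate count, and bias pattern below are assumptions for illustration.

```python
# Simplified sketch of the well-correction idea (not the exact published
# procedure): normalize each well location across all plates of the assay so
# that positions that read systematically high or low no longer bias hits.
import numpy as np

def well_correct(assay):
    """assay: array of shape (n_plates, n_rows, n_cols) of raw measurements."""
    loc_mean = assay.mean(axis=0, keepdims=True)       # per-location mean
    loc_sd = assay.std(axis=0, ddof=1, keepdims=True)  # per-location spread
    return (assay - loc_mean) / loc_sd                 # per-location z-scores

rng = np.random.default_rng(3)
assay = rng.normal(100.0, 10.0, size=(50, 8, 12))      # 50 plates, 96 wells
assay[:, 0, :] += 25.0           # row A reads systematically high on every plate
corrected = well_correct(assay)
print(assay[:, 0, :].mean().round(1), corrected[:, 0, :].mean().round(2))
```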
23. Improving Detection of Rare Biological Events in High-Throughput Screens
- Author
-
Laurence Lafanechère, Jennifer Button, Caroline Barette, Carl Murie, Robert Nadon, Groupe Plateforme et Moyens Scientifiques et techniques communs / Centre de Criblage pour Molécules Bio-Actives (GPMS / CMBA), Commissariat à l'énergie atomique et aux énergies alternatives (CEA), Genetics and Chemogenomics (GenChem), Laboratoire de Biologie à Grande Échelle (BGE - UMR S1038), Institut National de la Santé et de la Recherche Médicale (INSERM)-Université Grenoble Alpes [2016-2019] (UGA [2016-2019])-Institut de Recherche Interdisciplinaire de Grenoble (IRIG), Direction de Recherche Fondamentale (CEA) (DRF (CEA)), Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Direction de Recherche Fondamentale (CEA) (DRF (CEA)), Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Institut National de la Santé et de la Recherche Médicale (INSERM)-Université Grenoble Alpes [2016-2019] (UGA [2016-2019])-Institut de Recherche Interdisciplinaire de Grenoble (IRIG), Commissariat à l'énergie atomique et aux énergies alternatives (CEA)-Commissariat à l'énergie atomique et aux énergies alternatives (CEA), Institut d'oncologie/développement Albert Bonniot de Grenoble (INSERM U823), Université Joseph Fourier - Grenoble 1 (UJF)-CHU Grenoble-EFS-Institut National de la Santé et de la Recherche Médicale (INSERM), Genome Quebec Innovation Center, Dept Human Genetics, McGill University = Université McGill [Montréal, Canada], and Institut National de la Santé et de la Recherche Médicale (INSERM)-EFS-CHU Grenoble-Université Joseph Fourier - Grenoble 1 (UJF)
- Subjects
Normalization (statistics), Quality Control, Computer science, [SDV]Life Sciences [q-bio], Biochemistry, Analytical Chemistry, Small Molecule Libraries, Automation, Software, Drug Discovery, Humans, Statistical analysis, Spatial bias, Statistical hypothesis testing, Models, Statistical, Microsoft Excel, Reproducibility of Results, Replicate, High-Throughput Screening Assays, Molecular Medicine, Data mining, Biotechnology, HeLa Cells
- Abstract
The success of high-throughput screening (HTS) strategies depends on the effectiveness of both normalization methods and study design. We report comparisons among normalization methods in two titration series experiments. We also extend the results in a third experiment with two differently designed but otherwise identical screens: compounds in replicate plates were either placed in the same well locations or were randomly assigned to different locations. Best results were obtained when randomization was combined with normalization methods that corrected for within-plate spatial bias. We conclude that potent, reliable, and accurate HTS requires replication, randomization design strategies, and more extensive normalization than is typically done and that formal statistical testing is desirable. The Statistics and dIagnostic Graphs for HTS (SIGHTS) Microsoft Excel Add-In software is available to conduct most analyses reported here.
- Published
- 2015
- Full Text
- View/download PDF
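One design point from the abstract above, the benefit of randomizing compound placement across replicate plates, can be illustrated in a few lines. The sketch below is not part of the SIGHTS software; the plate size and compound names are invented.

```python
# Sketch (not the SIGHTS software) of randomized compound placement: the same
# 96 compounds occupy different wells on each replicate plate, so positional
# bias is not confounded with compound identity.
import numpy as np

rng = np.random.default_rng(4)
compounds = np.array([f"cmpd_{i:03d}" for i in range(96)])

def randomized_layout(rng):
    """Return an 8 x 12 plate map with compounds assigned to random wells."""
    return compounds[rng.permutation(96)].reshape(8, 12)

replicate_1 = randomized_layout(rng)
replicate_2 = randomized_layout(rng)

# Locate the replicate-1 well A1 compound on replicate 2
print(replicate_1[0, 0], np.argwhere(replicate_2 == replicate_1[0, 0])[0])
```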
24. Statistical practice in high-throughput screening data analysis
- Author
-
Jerry Pelletier, Sonia Cerquozzi, Nathalie Malo, Robert Nadon, and James A. Hanley
- Subjects
Biometry, Process (engineering), Computer science, Drug Evaluation, Preclinical, Biomedical Engineering, Guidelines as Topic, Bioengineering, Machine learning, Sensitivity and Specificity, Applied Microbiology and Biotechnology, Quality (business), Hit selection, Sensitivity (control systems), Drug discovery, Gene Expression Profiling, Reproducibility of Results, Replicate, Microarray Analysis, Identification (information), Data Interpretation, Statistical, Drug Design, Molecular Medicine, Biological Assay, Artificial intelligence, Data pre-processing, Biotechnology
- Abstract
High-throughput screening is an early critical step in drug discovery. Its aim is to screen a large number of diverse chemical compounds to identify candidate 'hits' rapidly and accurately. Few statistical tools are currently available, however, to detect quality hits with a high degree of confidence. We examine statistical aspects of data preprocessing and hit identification for primary screens. We focus on concerns related to positional effects of wells within plates, choice of hit threshold and the importance of minimizing false-positive and false-negative rates. We argue that replicate measurements are needed to verify assumptions of current methods and to suggest data analysis strategies when assumptions are not met. The integration of replicates with robust statistical methods in primary screens will facilitate the discovery of reliable hits, ultimately improving the sensitivity and specificity of the screening process.
- Published
- 2006
- Full Text
- View/download PDF
25. Inferential literacy for experimental high-throughput biology
- Author
-
Robert Nadon and Mathieu Miron
- Subjects
Emerging technologies, Gene Expression Profiling, MEDLINE, Computational Biology, Reproducibility of Results, Biology, Data science, Literacy, Software, Genetics, Animals, Humans, Statistical analysis, Biological scientists, High throughput biology, Oligonucleotide Array Sequence Analysis
- Abstract
Many biologists believe that data analysis expertise lags behind the capacity for producing high-throughput data. One view within the bioinformatics community is that biological scientists need to develop algorithmic skills to meet the demands of the new technologies. In this article, we argue that the broader concept of inferential literacy, which includes understanding of data characteristics, experimental design and statistical analysis, in addition to computation, more adequately encompasses what is needed for efficient progress in high-throughput biology.
- Published
- 2006
- Full Text
- View/download PDF
26. Large-scale recombination rate patterns are conserved among human populations
- Author
-
Thomas J. Hudson, David Serre, and Robert Nadon
- Subjects
Population, Recombination rate, Genomics, Biology, Genome, White People, Polymorphism (computer science), Genetics, Chromosomes, Human, Humans, Letters, education, Genetics (clinical), Recombination, Genetic, Polymorphism, Genetic, Asian, Genome, Human, Black or African American, Genetics, Population, Evolutionary biology, Human genome, Scale (map), Recombination
- Abstract
In humans, most recombination events occur in a small fraction of the genome. These hotspots of recombination show considerable variation in intensity and/or location across species and, potentially, across human populations. On a larger scale, the patterns of recombination rates have been mostly investigated in individuals of European ancestry, and it remains unknown whether the results obtained can be directly applied to other human populations. Here, we investigate this question using genome-wide polymorphism data. We show that population recombination rates recapitulate a large part of the genetic map information, regardless of the population considered. We also show that the ratio of the population recombination rate estimate of two populations is overall constant along the chromosomes. These two observations support the hypothesis that large-scale recombination patterns are conserved across human populations. Local deviations from the overall pattern of conservation of the recombination rates can be used to select candidate regions with large polymorphic inversions or under local selection.
- Published
- 2005
- Full Text
- View/download PDF
27. The Cognitive Interview: Does It Successfully Avoid the Dangers of Forensic Hypnosis?
- Author
-
Wayne G. Whitehouse, Emily Carota Orne, David F. Dinges, Brad L. Bates, Robert Nadon, and Martin T. Orne
- Subjects
Arts and Humanities (miscellaneous), Developmental and Educational Psychology, Experimental and Cognitive Psychology
- Abstract
Seventy-two undergraduates viewed a videotape of a bank robbery that culminated in the shooting of a young boy. Several days later, participants were interviewed about their recollection of events in the film through baseline oral and written narrative accounts followed by random assignment to a hypnosis (HYP) condition, the cognitive interview (CI), or a motivated, repeated recall (MRR) control interview. Participants also completed a forced interrogatory recall test, which indexed potential report criterion differences between the interview conditions. In terms of information provided for the first time during treatment interviews, HYP led to greater productivity than the CI or the MRR interview, which did not differ significantly from each other. Evidence that these differences in recall resulted primarily from report criterion differences rather than differences in accessible memory was obtained from the forced interrogatory recall test. In this test, no differences were observed between the three interview conditions. Finally, the data revealed that participants’ hypnotic ability was associated with the recall of erroneous and confabulatory material for those tested in the HYP and CI conditions but not those in the MRR condition. This suggests that some CI mnemonics may invoke hypnotic-like processes in hypnotizable people.
- Published
- 2005
- Full Text
- View/download PDF
28. Automated Screening of Neurite Outgrowth
- Author
-
Yuriy Alexandrov, Yuriy Cybuch, Peter Ramm, Bohdan J. Soltys, Robert Nadon, and Andrzej Cholewinski
- Subjects
Diagnostic Imaging, Neurons, Neurite, Total cell, Biology, Bioinformatics, Biochemistry, Analytical Chemistry, Automation, Mice, Cell bodies, Neurites, Animals, Molecular Medicine, Image acquisition, Measurement precision, Biotechnology, Biomedical engineering
- Abstract
Outgrowth of neurites in culture is used for assessing neurotrophic activity. Neurite measurements have been performed very slowly using manual methods or more efficiently with interactive image analysis systems. In contrast, medium-throughput and noninteractive image analysis of neurite screens has not been well described. The authors report the performance of an automated image acquisition and analysis system (IN Cell Analyzer 1000) in the neurite assay. Neuro-2a (N2a) cells were plated in 96-well plates and were exposed to 6 conditions of retinoic acid. Immunofluorescence labeling of the cytoskeleton was used to detect neurites and cell bodies. Acquisition of the images was automatic. The image set was then analyzed by both manual tracing and automated algorithms. On 5 relevant parameters (number of neurites, neurite length, total cell area, number of cells, neurite length per cell), the authors did not observe a difference between the automated analysis and the manual analysis done by tracing. These data suggest that the automated system addresses the same biology as human scorers and with the same measurement precision for treatment effects. However, throughput of the automated system is orders of magnitude higher than with manual methods.
- Published
- 2003
- Full Text
- View/download PDF
29. Estimation of a residual distribution with small numbers of repeated measurements
- Author
-
Edward Susko and Robert Nadon
- Subjects
Statistics and Probability, Half-normal distribution, Rate of convergence, Characteristic function (probability theory), Statistics, Expectation–maximization algorithm, Estimator, Applied mathematics, Statistics, Probability and Uncertainty, Upper and lower bounds, Distribution fitting, Variance-gamma distribution, Mathematics
- Abstract
The authors consider the estimation of a residual distribution for different measurement problems with a common measurement error process. The problem is motivated by issues arising in the analysis of gene expression data but should have application in other similar settings. It is implicitly assumed throughout that there are large numbers of measurements but small numbers of repeated measurements. As a consequence, the distribution of the estimated residuals is a biased estimate of the residual distribution. The authors present two methods for the estimation of the residual distribution with some restriction on the form of the distribution. They give an upper bound for the rate of convergence for an estimator based on the characteristic function and compare its performance with that of another estimator with simulations.
- Published
- 2002
- Full Text
- View/download PDF
30. Detecting and overcoming systematic bias in high-throughput screening technologies: a comprehensive review of practical issues and methodological solutions
- Author
-
Vladimir Makarenkov, Robert Nadon, Abdulaziz A. Alsuwailem, and Iurie Caraus
- Subjects
Protocol (science), DNA, Complementary, Computer science, Process (engineering), High-Throughput Nucleotide Sequencing, Reproducibility of Results, Data science, Database normalization, Identification (information), Systematic review, Data quality, RNA Interference, Data mining, Error detection and correction, Molecular Biology, Throughput (business), Information Systems
- Abstract
Significant efforts have been made recently to improve data throughput and data quality in screening technologies related to drug design. The modern pharmaceutical industry relies heavily on high-throughput screening (HTS) and high-content screening (HCS) technologies, which include small molecule, complementary DNA (cDNA) and RNA interference (RNAi) types of screening. Data generated by these screening technologies are subject to several environmental and procedural systematic biases which introduce errors into the hit identification process. We first review systematic biases typical of HTS and HCS screens. We highlight that study design issues and the way in which data are generated are crucial for providing unbiased screening results. Considering various data sets, including the publicly available ChemBank data, we assess the rates of systematic bias in experimental HTS by using plate-specific and assay-specific error detection tests. We describe main data normalization and correction techniques and introduce a general data pre-processing protocol. This protocol can be recommended for academic and industrial researchers involved in the analysis of current or next generation high-throughput screening data.
- Published
- 2014
31. Control-Plate Regression (CPR) Normalization for High-Throughput Screens with Many Active Features
- Author
-
Laurence Lafanechère, Carl Murie, Robert Nadon, and Caroline Barette
- Subjects
Systematic error, Normalization (statistics), Accuracy and precision, Anti-HIV Agents, Chemistry, Pharmaceutical, [SDV]Life Sciences [q-bio], Bacterial Toxins, Pyruvate Kinase, Biology, Bioinformatics, Biochemistry, Analytical Chemistry, Statistical analyses, Humans, Computer Simulation, RNA, Small Interfering, Antigens, Bacterial, Models, Statistical, Dose-Response Relationship, Drug, Reproducibility of Results, Pattern recognition, Cell based assays, Regression, High-Throughput Screening Assays, Microscopy, Fluorescence, Regression Analysis, Molecular Medicine, Biological Assay, RNA Interference, Artificial intelligence, Biotechnology, Primary screening
- Abstract
Systematic error is present in all high-throughput screens, lowering measurement accuracy. Because screening occurs at the early stages of research projects, measurement inaccuracy leads to following up inactive features and failing to follow up active features. Current normalization methods take advantage of the fact that most primary-screen features (e.g., compounds) within each plate are inactive, which permits robust estimates of row and column systematic-error effects. Screens that contain a majority of potentially active features pose a more difficult challenge because even the most robust normalization methods will remove at least some of the biological signal. Control plates that contain the same feature in all wells can provide a solution to this problem by providing well-by-well estimates of systematic error, which can then be removed from the treatment plates. We introduce the robust control-plate regression (CPR) method, which uses this approach. CPR's performance is compared to a high-performing primary-screen normalization method in four experiments. These data were also perturbed to simulate screens with large numbers of active features to further assess CPR's performance. CPR performs almost as well as the best performing normalization methods with primary screens and outperforms the Z-score and equivalent methods with screens containing a large proportion of active features.
- Published
- 2014
- Full Text
- View/download PDF
32. What this Field Needs is a Good Nomological Network
- Author
-
Robert Nadon
- Subjects
Complementary and Manual Therapy ,Structure (mathematical logic) ,Personality Inventory ,Psychometrics ,Field (Bourdieu) ,Individuality ,Suggestibility ,Reproducibility of Results ,Construct validity ,Nomological network ,Clinical Psychology ,Humans ,Engineering ethics ,Suggestion ,Psychology ,Social psychology ,Hypnosis - Abstract
Research in the field of hypnosis lacks a coherent structure on which to build. This lack of a mature nomological network stems from fundamental disagreements concerning the construct validity of hypnotizability, which in turn stem in part from different research practices across laboratories. For these reasons, the field has had less impact on psychology and medicine than is warranted by the numerous sophisticated scientific studies conducted during the past three decades.
- Published
- 1997
- Full Text
- View/download PDF
33. Re-analysis of genome wide data on mammalian microRNA-mediated suppression of gene expression
- Author
-
Ola Larsson and Robert Nadon
- Subjects
ribosome profiling ,Gene knockdown ,Messenger RNA ,mRNA translation ,microRNA ,Nonsense-mediated decay ,Cell Biology ,bioinformatics ,Biology ,Biochemistry ,Molecular biology ,Genome ,statistics ,P-bodies ,Gene expression ,Ribosome profiling ,mRNA stability ,Molecular Biology ,Developmental Biology ,Research Paper - Abstract
microRNAs are short, endogenously expressed RNAs that regulate gene expression post-transcriptionally. Although both mRNA degradation and suppression of mRNA translation can mediate reduced protein levels following microRNA targeting of an mRNA, their relative contributions have remained elusive. A recent genome-wide study in mammals employing RNA sequencing to measure microRNA effects on mRNA translation and stability concluded that 84–89% of microRNA-induced suppression of gene expression is due to degradation of target mRNAs. We re-analyzed this data set and applied a number of analysis modifications which revealed that the contribution of mRNA translation was likely underestimated for some mRNA subsets. Moreover, in contrast to the original analysis, our analysis indicated that suppression of mRNA translation precedes mRNA degradation upon microRNA targeting. Our findings thereby enhance our understanding of microRNA-mediated genome-wide suppression of gene expression in mammals.
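As a toy illustration of how such a decomposition can be framed (not the re-analysis performed in the paper), the sketch below apportions a gene's total suppression, measured as the ribosome-footprint log2 fold change, into the share explained by mRNA loss versus reduced translation. The column meanings, the simple log-ratio decomposition, and the stability threshold are assumptions for illustration only.

```python
import numpy as np

def decay_fraction(mrna_lfc: np.ndarray, rpf_lfc: np.ndarray,
                   min_suppression: float = 0.2) -> np.ndarray:
    """For each suppressed gene, estimate the fraction of total suppression
    (ribosome-footprint log2 fold change) explained by mRNA decay.  Genes with
    total suppression weaker than `min_suppression` are skipped because the
    ratio becomes unstable near zero."""
    suppressed = rpf_lfc < -min_suppression
    frac = np.full(mrna_lfc.shape, np.nan)
    frac[suppressed] = np.clip(mrna_lfc[suppressed] / rpf_lfc[suppressed], 0.0, 1.0)
    return frac

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 500
    true_decay = rng.uniform(0.5, 1.0, n)        # simulated per-gene decay share
    rpf_lfc = -rng.uniform(0.0, 2.0, n)          # total suppression of translational output
    mrna_lfc = true_decay * rpf_lfc + rng.normal(0, 0.05, n)
    frac = decay_fraction(mrna_lfc, rpf_lfc)
    print("median estimated decay share:", np.nanmedian(frac))
```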
- Published
- 2013
34. Acacetin and Chrysin, Two Polyphenolic Compounds, Alleviate Telomeric Position Effect in Human Cells
- Author
-
Frédérique Magdinier, Laurence Lafanechère, Robert Nadon, Arva Firozhoussen, Eric Gilson, Yiming Lu, Natacha Broucqsault, Amina Boussouar, Adelaïde Saint-Léger, Jing Ye, Caroline Barette, and Alexandre Ottaviani
- Subjects
telomere-induced foci ,[SDV.BIO]Life Sciences [q-bio]/Biotechnology ,DNA damage ,[SDV.NEU.NB]Life Sciences [q-bio]/Neurons and Cognition [q-bio.NC]/Neurobiology ,[SDV.BC.BC]Life Sciences [q-bio]/Cellular Biology/Subcellular Processes [q-bio.SC] ,[SDV.GEN.GH] Life Sciences [q-bio]/Genetics/Human genetics ,Biology ,[SDV.BBM.BM] Life Sciences [q-bio]/Biochemistry, Molecular Biology/Molecular biology ,DNA damage response ,Flavones ,03 medical and health sciences ,chemistry.chemical_compound ,0302 clinical medicine ,[SDV.MHEP.CSC]Life Sciences [q-bio]/Human health and pathology/Cardiology and cardiovascular system ,[SDV.BBM.GTP]Life Sciences [q-bio]/Biochemistry, Molecular Biology/Genomics [q-bio.GN] ,Drug Discovery ,[SDV.BC.BC] Life Sciences [q-bio]/Cellular Biology/Subcellular Processes [q-bio.SC] ,Chrysin ,telomeric position effect ,[SDV.BBM.BC]Life Sciences [q-bio]/Biochemistry, Molecular Biology/Biochemistry [q-bio.BM] ,[SDV.BBM.BC] Life Sciences [q-bio]/Biochemistry, Molecular Biology/Biochemistry [q-bio.BM] ,Gene ,030304 developmental biology ,flavanoid ,chemistry.chemical_classification ,Genetics ,telomere ,0303 health sciences ,Acacetin ,lcsh:RM1-950 ,[SDV.NEU.NB] Life Sciences [q-bio]/Neurons and Cognition [q-bio.NC]/Neurobiology ,[SDV.BBM.BM]Life Sciences [q-bio]/Biochemistry, Molecular Biology/Molecular biology ,Shelterin ,[SDV.BIO] Life Sciences [q-bio]/Biotechnology ,[SDV.MHEP.CSC] Life Sciences [q-bio]/Human health and pathology/Cardiology and cardiovascular system ,3. Good health ,Telomere ,Cell biology ,polyphenol ,lcsh:Therapeutics. Pharmacology ,Position effect ,chemistry ,[SDV.GEN.GH]Life Sciences [q-bio]/Genetics/Human genetics ,030220 oncology & carcinogenesis ,[SDV.BBM.GTP] Life Sciences [q-bio]/Biochemistry, Molecular Biology/Genomics [q-bio.GN] ,Molecular Medicine ,Original Article - Abstract
We took advantage of the ability of human telomeres to silence neighboring genes (telomere position effect or TPE) to design a high-throughput screening assay for drugs altering telomeres. We identified, for the first time, that two dietary flavones, acacetin and chrysin, are able to specifically alleviate TPE in human cells. We further investigated their influence on telomere integrity and showed that both drugs drastically deprotect telomeres against DNA damage response. However, telomere deprotection triggered by shelterin dysfunction does not affect TPE, indicating that acacetin and chrysin target several functions of telomeres. These results show that TPE-based screening assays represent valuable methods to discover new compounds targeting telomeres. Molecular Therapy-Nucleic Acids (2013) 2, e116; doi:10.1038/mtna.2013.42; published online 20 August 2013.
- Published
- 2013
- Full Text
- View/download PDF
35. A New Effective Method for Elimination of Systematic Error in Experimental High-Throughput Screening
- Author
-
Robert Nadon, Vladimir Makarenkov, and Plamen Dragiev
- Subjects
Overdetermined system ,Systematic error ,Noise ,Computer science ,Effective method ,Context (language use) ,Exact location ,Median absolute deviation ,Error detection and correction ,Algorithm - Abstract
High-throughput screening (HTS) is a critical step of the drug discovery process. It involves measuring the activity levels of thousands of chemical compounds. Several technical and environmental factors can affect an experimental HTS campaign and thus cause systematic deviations from correct results. A number of error correction methods have been designed to address this issue in the context of experimental HTS (Brideau et al., J Biomol Screen 8:634–647, 2003; Kevorkov and Makarenkov, J Biomol Screen 10:557–567, 2005; Makarenkov et al., Bioinformatics 23:1648–1657, 2007; Malo et al., Nat Biotechnol 24:167–175, 2006). Despite their power to reduce the impact of systematic noise, all these methods introduce a bias when applied to data not containing any systematic error. We present a new method that eliminates systematic error from HTS screens by finding an approximate solution to an overdetermined system of linear equations, using prior knowledge of the error's exact location. This is an important improvement over the popular B-score method designed by Merck Frosst researchers (Brideau et al., J Biomol Screen 8:634–647, 2003) and widely used in modern HTS.
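The following Python sketch illustrates the general idea of an overdetermined linear system with known error locations: measurements are modelled as a grand mean plus additive effects only for the rows and columns flagged a priori as error-affected, the system is solved by least squares, and the fitted effects are subtracted. It is a simplified illustration under those assumptions, not the authors' published algorithm.

```python
import numpy as np

def remove_known_row_col_error(plate: np.ndarray,
                               bad_rows: list[int],
                               bad_cols: list[int]) -> np.ndarray:
    """Fit additive effects only for rows/columns known (a priori) to carry
    systematic error, by least squares on the overdetermined system
        measurement = grand_mean + row_effect + col_effect + residual,
    then subtract the fitted effects.  Unaffected rows/columns are left untouched."""
    y = plate.ravel()
    cols = [np.ones_like(y)]                      # intercept (grand mean)
    for r in bad_rows:
        ind = np.zeros_like(plate); ind[r, :] = 1.0; cols.append(ind.ravel())
    for c in bad_cols:
        ind = np.zeros_like(plate); ind[:, c] = 1.0; cols.append(ind.ravel())
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted_error = (X[:, 1:] @ beta[1:]).reshape(plate.shape)   # exclude the intercept
    return plate - fitted_error

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    plate = rng.normal(0, 1, (8, 12))
    plate[0, :] += 2.5            # simulated edge-row artifact
    plate[:, 11] -= 1.5           # simulated bad last column
    cleaned = remove_known_row_col_error(plate, bad_rows=[0], bad_cols=[11])
    print("row 0 mean before/after: %.2f / %.2f" % (plate[0].mean(), cleaned[0].mean()))
```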
- Published
- 2013
- Full Text
- View/download PDF
36. Intensity quantile estimation and mapping—a novel algorithm for the correction of image non-uniformity bias in HCS data
- Author
-
Ernest Lo, Robert Nadon, Laurence Lafanechère, Anne Martinez, and Emmanuelle Soleilhac
- Subjects
Statistics and Probability ,Computer science ,Image processing ,Biochemistry ,Microtubules ,03 medical and health sciences ,0302 clinical medicine ,Data acquisition ,Image Processing, Computer-Assisted ,Humans ,Molecular Biology ,030304 developmental biology ,0303 health sciences ,Pixel ,Function (mathematics) ,Quantile function ,Computer Science Applications ,Computational Mathematics ,Computational Theory and Mathematics ,[INFO.INFO-TI]Computer Science [cs]/Image Processing [eess.IV] ,[INFO.INFO-BI]Computer Science [cs]/Bioinformatics [q-bio.QM] ,Algorithm ,030217 neurology & neurosurgery ,Algorithms ,Quantile ,HeLa Cells - Abstract
Motivation: Image non-uniformity (NU) refers to systematic, slowly varying spatial gradients in images that result in a bias that can affect all downstream image processing, quantification and statistical analysis steps. Image NU is poorly modeled in the field of high-content screening (HCS), however, such that current conventional correction algorithms may be either inappropriate for HCS or fail to take advantage of the information available in HCS image data. Results: A novel image NU bias correction algorithm, termed intensity quantile estimation and mapping (IQEM), is described. The algorithm estimates the full non-linear form of the image NU bias by mapping pixel intensities to a reference intensity quantile function. IQEM accounts for the variation in NU bias over broad cell intensity ranges and data acquisition times, both of which are characteristic of HCS image datasets. Validation of the method, using simulated and HCS microtubule polymerization screen images, is presented. Two requirements of IQEM are that the dataset consists of large numbers of images acquired under identical conditions and that cells are distributed with no within-image spatial preference. Availability and implementation: MATLAB function files are available at http://nadon-mugqic.mcgill.ca/. Contact: robert.nadon@mcgill.ca Supplementary Information: Supplementary data are available at Bioinformatics online.
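A simplified Python sketch of the quantile-mapping idea follows: pool pixels from many images acquired under identical conditions to build a reference intensity quantile function, then remap each spatial block's intensities onto it. The blockwise approximation and function names are assumptions made here for brevity; the published IQEM MATLAB implementation estimates the full non-linear bias surface and handles acquisition-time variation.

```python
import numpy as np

def iqem_like_correction(images: np.ndarray, block: int = 16) -> np.ndarray:
    """Approximate non-uniformity correction by quantile mapping.  `images` has shape
    (n_images, H, W); cells are assumed to land anywhere in the field with equal
    probability, so every spatial block should see the same intensity distribution
    across the stack.  Each block's empirical quantiles are mapped onto the global
    (reference) quantiles."""
    n, H, W = images.shape
    probs = np.linspace(0, 1, 256)
    ref_q = np.quantile(images, probs)                 # reference quantile function
    corrected = np.empty_like(images, dtype=float)
    for i in range(0, H, block):
        for j in range(0, W, block):
            patch = images[:, i:i+block, j:j+block]
            local_q = np.quantile(patch, probs)
            # Map local intensity -> local quantile -> reference intensity.
            corrected[:, i:i+block, j:j+block] = np.interp(patch, local_q, ref_q)
    return corrected

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    H = W = 64
    xx = np.mgrid[0:H, 0:W][1]
    shading = 1.0 + 0.5 * (xx / W)                     # simulated left-to-right illumination bias
    stack = rng.gamma(2.0, 50.0, (200, H, W)) * shading
    fixed = iqem_like_correction(stack)
    print("column-mean range before:", np.ptp(stack.mean(axis=(0, 1))))
    print("column-mean range after: ", np.ptp(fixed.mean(axis=(0, 1))))
```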
- Published
- 2012
- Full Text
- View/download PDF
37. High-content screening for the discovery of pharmacological compounds: advantages, challenges and potential benefits of recent technological developments
- Author
-
Laurence Lafanechère, Robert Nadon, and Emmanuelle Soleilhac
- Subjects
[SDV.BIO]Life Sciences [q-bio]/Biotechnology ,Computer science ,[SDV.IB.IMA]Life Sciences [q-bio]/Bioengineering/Imaging ,[SDV]Life Sciences [q-bio] ,spheroids ,[SDV.BC]Life Sciences [q-bio]/Cellular Biology ,high-content screening ,01 natural sciences ,Field (computer science) ,03 medical and health sciences ,Basic research ,Drug Discovery ,single-cell analysis ,ComputingMilieux_MISCELLANEOUS ,030304 developmental biology ,0303 health sciences ,010405 organic chemistry ,Drug discovery ,high-content analysis ,[SDV.SP]Life Sciences [q-bio]/Pharmaceutical sciences ,Data science ,3. Good health ,0104 chemical sciences ,High-content screening ,embryonic structures ,data reduction - Abstract
Importance of the field: Screening compounds with cell-based assays and microscopy image-based analysis is an approach currently favored for drug discovery. Because of its high information yield, the strategy is called high-content screening (HCS). Areas covered in this review: This review covers the application of HCS in drug discovery and also in basic research of potential new pathways that can be targeted for treatment of pathophysiological diseases. HCS faces several challenges, however, including the extraction of pertinent information from the massive amount of data generated from images. Several proposed approaches to HCS data acquisition and analysis are reviewed. What the reader will gain: Different solutions from the fields of mathematics, bioinformatics and biotechnology are presented. Potential applications and limits of these recent technical developments are also discussed. Take home message: HCS is a multidisciplinary and multistep approach for understanding the effects of compounds on biological processes at the cellular level. Reliable results depend on the quality of the overall process and require strong interdisciplinary collaborations.
- Published
- 2012
- Full Text
- View/download PDF
38. GENE EXPRESSION – TIME TO CHANGE POINT OF VIEW?
- Author
-
Ola Larsson and Robert Nadon
- Published
- 2012
- Full Text
- View/download PDF
39. anota: Analysis of differential translation in genome-wide studies
- Author
-
Ola Larsson, Robert Nadon, and Nahum Sonenberg
- Subjects
Statistics and Probability ,Genetics ,Regulation of gene expression ,Genome, Human ,Translation (biology) ,Biology ,Biochemistry ,Genome ,Genetic translation ,Computer Science Applications ,Bioconductor ,Computational Mathematics ,Computational Theory and Mathematics ,Gene Expression Regulation ,Polysome ,Protein Biosynthesis ,Gene expression ,Protein biosynthesis ,Humans ,RNA, Messenger ,Molecular Biology ,Ribosomes ,Software ,Genome-Wide Association Study - Abstract
Summary: Translational control of gene expression has emerged as a major mechanism that regulates many biological processes and shows dysregulation in human diseases including cancer. When studying differential translation, levels of both actively translating mRNAs and total cytosolic mRNAs are obtained where the latter is used to correct for a possible contribution of differential cytosolic mRNA levels to the observed differential levels of actively translated mRNAs. We have recently shown that analysis of partial variance (APV) corrects for cytosolic mRNA levels more effectively than the commonly applied log ratio approach. APV provides a high degree of specificity and sensitivity for detecting biologically meaningful translation changes, especially when combined with a variance shrinkage method for estimating random error. Here we describe the anota (analysis of translational activity) R-package which implements APV, allows scrutiny of associated statistical assumptions and provides biologically motivated filters for analysis of genome wide datasets. Although the package was developed for analysis of differential translation in polysome microarray or ribosome-profiling datasets, any high-dimensional data that result in paired controls, such as RNP immunoprecipitation-microarray (RIP-CHIP) datasets, can be successfully analyzed with anota. Availability: The anota Bioconductor package, www.bioconductor.org. Contact: ola.larsson@ki.se; robert.nadon@mcgill.ca
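The essence of analysis of partial variance can be sketched as a per-gene ANCOVA: regress the polysome-associated (actively translated) mRNA level on a treatment indicator while adjusting for the cytosolic mRNA level as a covariate, rather than forming per-sample log ratios. The Python sketch below uses ordinary least squares and omits anota's variance shrinkage, assumption checks, and filters; it is an illustration of the modelling idea, not the anota implementation (which is an R/Bioconductor package).

```python
import numpy as np
from scipy import stats

def apv_gene(polysome: np.ndarray, cytosolic: np.ndarray,
             group: np.ndarray) -> tuple[float, float]:
    """Partial-variance-style test for one gene: regress polysome-associated mRNA on a
    treatment indicator plus cytosolic mRNA as covariate, and test the treatment
    coefficient.  Returns (effect estimate, two-sided p-value)."""
    X = np.column_stack([np.ones_like(polysome), group.astype(float), cytosolic])
    beta, *_ = np.linalg.lstsq(X, polysome, rcond=None)
    n, p = X.shape
    resid = polysome - X @ beta
    sigma2 = resid @ resid / (n - p)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t = beta[1] / np.sqrt(cov[1, 1])
    pval = 2 * stats.t.sf(abs(t), df=n - p)
    return beta[1], pval

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    group = np.repeat([0, 1], 4)                  # two conditions, 4 replicates each
    cytosolic = rng.normal(8.0, 1.0, 8)           # log2 cytosolic mRNA
    # Polysome level tracks cytosolic mRNA, plus a genuine translation increase in group 1.
    polysome = 0.9 * cytosolic + 0.8 * group + rng.normal(0, 0.2, 8)
    effect, p = apv_gene(polysome, cytosolic, group)
    print(f"translation effect = {effect:.2f}, p = {p:.3g}")
```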
- Published
- 2011
40. Experimental design and statistical methods for improved hit detection in high-throughput screening
- Author
-
David Y. Thomas, Jerry Pelletier, Nathalie Malo, Graeme W. Carlile, James A. Hanley, Jing Liu, and Robert Nadon
- Subjects
Computer science ,Robust statistics ,Drug Evaluation, Preclinical ,Fluorescent Antibody Technique ,Biochemistry ,Analytical Chemistry ,Random Allocation ,Luciferases, Firefly ,Replication (statistics) ,Statistical inference ,Animals ,Computer Simulation ,False Positive Reactions ,Statistical hypothesis testing ,Luciferases, Renilla ,Protein Synthesis Inhibitors ,Models, Statistical ,Receiver operating characteristic ,Cell-Free System ,Statistical model ,Replicate ,High-Throughput Screening Assays ,ROC Curve ,Research Design ,Protein Biosynthesis ,Benchmark (computing) ,Molecular Medicine ,Algorithm ,Biotechnology - Abstract
Identification of active compounds in high-throughput screening (HTS) contexts can be substantially improved by applying classical experimental design and statistical inference principles to all phases of HTS studies. The authors present both experimental and simulated data to illustrate how true-positive rates can be maximized without increasing false-positive rates by the following analytical process. First, the use of robust data preprocessing methods reduces unwanted variation by removing row, column, and plate biases. Second, replicate measurements allow estimation of the magnitude of the remaining random error and the use of formal statistical models to benchmark putative hits relative to what is expected by chance. Receiver Operating Characteristic (ROC) analyses revealed superior power for data preprocessed by a trimmed-mean polish method combined with the RVM t-test, particularly for small- to moderate-sized biological hits.
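A minimal two-way polish in the spirit of the trimmed-mean preprocessing mentioned above is sketched below: robust row and column location estimates are alternately swept out of the plate before hits are benchmarked. The trimming proportion and iteration count are illustrative choices, not the exact settings used in the paper.

```python
import numpy as np
from scipy import stats

def trimmed_mean_polish(plate: np.ndarray, trim: float = 0.1,
                        n_iter: int = 10) -> np.ndarray:
    """Iteratively remove row and column biases using trimmed means, a robust analogue
    of Tukey's median polish.  Returns the residual plate, i.e. measurements with
    positional (row/column) effects swept out."""
    resid = plate.astype(float).copy()
    for _ in range(n_iter):
        row_eff = stats.trim_mean(resid, trim, axis=1)   # per-row robust centre
        resid -= row_eff[:, None]
        col_eff = stats.trim_mean(resid, trim, axis=0)   # per-column robust centre
        resid -= col_eff[None, :]
    return resid

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    plate = rng.normal(0, 1, (8, 12))
    plate += np.linspace(-1, 1, 8)[:, None]              # simulated row gradient
    plate[2, 5] += 6.0                                   # one genuine hit
    polished = trimmed_mean_polish(plate)
    print("strongest well after polish:", np.unravel_index(np.argmax(polished), plate.shape))
```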
- Published
- 2010
41. Accurate samples for testing mass spectrometry based peptide quantification algorithms
- Author
-
Robert E. Kearney, Brian Carrillo, Robert Nadon, and Sylvie Laboissiere
- Subjects
chemistry.chemical_classification ,Isotope ,Mass spectrometry based proteomics ,Computer science ,Calibration (statistics) ,Molecular biophysics ,Reproducibility of Results ,Peptide ,Proteomics ,Mass spectrometry ,Mass Spectrometry ,Set (abstract data type) ,Noise ,chemistry ,Isotope Labeling ,Databases, Protein ,Peptides ,Algorithm ,Algorithms - Abstract
Quantitative proteomic experiments use algorithms to estimate peptide abundances from spectra. The efficacy of these algorithms is usually tested against a contrived mixture of proteins. However, the numerous error sources in mass spectrometry based proteomics experiments must be accounted for to evaluate novel algorithms in an unbiased manner. We set out to examine how to best utilize a set of calibration data for this purpose. We demonstrated that calibration data will have substantial noise whose magnitude depends on whether comparisons are made within or across experiments. We then propose a novel method of testing algorithms that uses the natural isotopic envelope of peptides to minimize measurement noise. We show that the variability of isotopic peptide ratios is an order of magnitude lower with this approach than with typical standard protein mixtures. We conclude by demonstrating the usefulness of this new technique in the analysis of typical peak picking algorithms.
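A toy version of the isotopic-envelope idea is sketched below: a peptide's natural isotope peaks have ratios that can be predicted from its composition, giving a built-in, low-noise benchmark against which measured peak intensities can be compared. The carbon-only binomial approximation, the example carbon count, and the example intensities are simplifications for illustration, not the authors' procedure.

```python
import numpy as np
from scipy import stats

P_C13 = 0.0107   # natural abundance of carbon-13

def theoretical_envelope(n_carbons: int, n_peaks: int = 4) -> np.ndarray:
    """Approximate relative isotopic peak intensities (M, M+1, M+2, ...) of a peptide,
    treating carbon as the only element contributing heavy isotopes."""
    k = np.arange(n_peaks)
    probs = stats.binom.pmf(k, n_carbons, P_C13)
    return probs / probs[0]                  # express relative to the monoisotopic peak

def envelope_error(observed, n_carbons: int) -> np.ndarray:
    """Relative error of observed isotope-peak ratios against theory; deviations reflect
    the quantification algorithm and instrument noise rather than sample preparation."""
    observed = np.asarray(observed, dtype=float)
    expected = theoretical_envelope(n_carbons, len(observed))
    return (observed / observed[0]) / expected - 1.0

if __name__ == "__main__":
    # A peptide with ~60 carbons: M+1 is expected at roughly 64% of the monoisotopic peak.
    print("expected envelope:", np.round(theoretical_envelope(60), 3))
    print("relative errors:  ", np.round(envelope_error([1.0e6, 6.3e5, 2.1e5, 4.5e4], 60), 3))
```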
- Published
- 2010
- Full Text
- View/download PDF
42. Systematic error detection in experimental high-throughput screening
- Author
-
Vladimir Makarenkov, Plamen Dragiev, and Robert Nadon
- Subjects
Systematic error ,Computer science ,Context (language use) ,lcsh:Computer applications to medicine. Medical informatics ,computer.software_genre ,Bioinformatics ,01 natural sciences ,Biochemistry ,Discrete Fourier transform ,03 medical and health sciences ,Software ,Structural Biology ,Drug Discovery ,lcsh:QH301-705.5 ,Molecular Biology ,030304 developmental biology ,Statistical hypothesis testing ,0303 health sciences ,business.industry ,Applied Mathematics ,Methodology Article ,0104 chemical sciences ,Computer Science Applications ,High-Throughput Screening Assays ,010404 medicinal & biomolecular chemistry ,lcsh:Biology (General) ,Data Interpretation, Statistical ,lcsh:R858-859.7 ,Data mining ,business ,Error detection and correction ,computer - Abstract
Background High-throughput screening (HTS) is a key part of the drug discovery process during which thousands of chemical compounds are screened and their activity levels measured in order to identify potential drug candidates (i.e., hits). Many technical, procedural or environmental factors can cause systematic measurement error or inequalities in the conditions in which the measurements are taken. Such systematic error has the potential to critically affect the hit selection process. Several error correction methods and software have been developed to address this issue in the context of experimental HTS [1–7]. Despite their power to reduce the impact of systematic error when applied to error-perturbed datasets, those methods also have one disadvantage: they introduce a bias when applied to data not containing any systematic error [6]. Hence, we need first to assess the presence of systematic error in a given HTS assay and then apply a systematic error correction method if and only if the presence of systematic error has been confirmed by statistical tests. Results We tested three statistical procedures to assess the presence of systematic error in experimental HTS data: the χ2 goodness-of-fit test, Student's t-test and the Kolmogorov-Smirnov test [8] preceded by the Discrete Fourier Transform (DFT) method [9]. We applied these procedures first to raw HTS measurements and then to estimated hit distribution surfaces. The three competing tests were applied to analyse simulated datasets containing different types of systematic error, and to a real HTS dataset. Their accuracy was compared under various error conditions. Conclusions A successful assessment of the presence of systematic error in experimental HTS assays is possible when the appropriate statistical methodology is used. Namely, the t-test should be carried out by researchers to determine whether systematic error is present in their HTS data prior to applying any error correction method. This important step can significantly improve the quality of selected hits.
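A bare-bones Python version of the "test before you correct" recommendation is sketched below: for each row and column of a plate, a two-sample t-test compares the wells in that line against the rest of the plate, and correction would be applied only if some line is flagged. The per-line Welch test and the Bonferroni threshold are illustrative choices standing in for the fuller procedures evaluated in the paper.

```python
import numpy as np
from scipy import stats

def detect_systematic_error(plate: np.ndarray, alpha: float = 0.01):
    """Flag rows/columns whose measurements differ significantly from the rest of the
    plate (Welch t-test), with a Bonferroni correction over all lines tested."""
    n_rows, n_cols = plate.shape
    n_tests = n_rows + n_cols
    flagged = []
    for axis, label, n_lines in ((0, "row", n_rows), (1, "col", n_cols)):
        for k in range(n_lines):
            line = np.take(plate, k, axis=axis).ravel()
            rest = np.delete(plate, k, axis=axis).ravel()
            p = stats.ttest_ind(line, rest, equal_var=False).pvalue
            if p < alpha / n_tests:
                flagged.append((label, k, p))
    return flagged

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    clean = rng.normal(0, 1, (16, 24))              # 384-well plate, no systematic error
    biased = clean.copy()
    biased[:, 0] += 1.5                             # first column drifts upward
    print("clean plate flags: ", detect_systematic_error(clean))    # ideally empty
    print("biased plate flags:", detect_systematic_error(biased))   # should flag column 0
```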
- Published
- 2010
43. Laterality of Hypnotic Response
- Author
-
Irene P. Hoyt, Laura L. Otto-salaj, John F. Kihlstrom, Robert Nadon, and Patricia A. Register
- Subjects
Male ,Complementary and Manual Therapy ,Hypnosis ,medicine.medical_specialty ,medicine.drug_class ,Neuropsychology ,Audiology ,Functional Laterality ,Developmental psychology ,Hypnotic ,Clinical Psychology ,Cerebral activity ,Laterality ,medicine ,Humans ,Female ,Suggestion ,Psychology - Abstract
In an investigation of hemispheric activity during hypnosis, a total of 1269 Ss received hypnotizability scales containing suggestions targeting the left or right side of the body. There were no consistent differences in response strength on the left compared to the right side. Nor were there differences in hypnotizability between right- and left-handed (and ambidextrous) Ss, or between Ss who sat on the left versus the right side of the testing room. Definitive evidence of lateralized cerebral activity associated with hypnosis and hypnotizability can only come from direct neuropsychological, electrocortical, or brain-imaging investigations.
- Published
- 1992
- Full Text
- View/download PDF
44. Methods for combining peptide intensities to estimate relative protein abundance
- Author
-
C.M. Yanofsky, Sylvie Laboissiere, Robert E. Kearney, Robert Nadon, and Brian Carrillo
- Subjects
Statistics and Probability ,Molecular Sequence Data ,Peptide ,Mass spectrometry ,Biochemistry ,Noise (electronics) ,Peptide Mapping ,Sensitivity and Specificity ,Abundance (ecology) ,Amino Acid Sequence ,Molecular Biology ,Mathematics ,chemistry.chemical_classification ,Observational error ,Detector ,Proteins ,Computer Science Applications ,Intensity (physics) ,Computational Mathematics ,Computational Theory and Mathematics ,chemistry ,Isotope Labeling ,Protein abundance ,Biological system ,Algorithms - Abstract
Motivation: Labeling techniques are being used increasingly to estimate relative protein abundances in quantitative proteomic studies. These techniques require the accurate measurement of correspondingly labeled peptide peak intensities to produce high-quality estimates of differential expression ratios. In mass spectrometers with counting detectors, measurement noise varies with intensity, and accuracy therefore increases with the number of ions detected; as a result, the relative variability of peptide intensity measurements also varies with intensity. This effect must be accounted for when combining information from multiple peptides to estimate relative protein abundance. Results: We examined a variety of algorithms that estimate protein differential expression ratios from multiple peptide intensity measurements. Algorithms that account for the variation of measurement error with intensity were found to provide the most accurate estimates of differential abundance. A simple Sum-of-Intensities algorithm provided the best estimates of true protein ratios of all algorithms tested. Contact: robert.kearney@mcgill.ca Supplementary information: Supplementary data are available at Bioinformatics online.
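The contrast between a naive mean of per-peptide ratios and a Sum-of-Intensities estimate can be illustrated with a short simulation: summing intensities before forming the ratio implicitly weights peptides by their (more reliable) high-intensity measurements. The Poisson counting-noise model and the example intensities below are assumptions for illustration, not data from the paper.

```python
import numpy as np

def mean_of_ratios(light: np.ndarray, heavy: np.ndarray) -> float:
    """Average per-peptide ratios; every peptide counts equally, so noisy
    low-intensity peptides degrade the estimate."""
    return float(np.mean(heavy / light))

def sum_of_intensities(light: np.ndarray, heavy: np.ndarray) -> float:
    """Ratio of summed intensities; high-intensity (low relative error) peptides
    dominate, matching counting-detector noise behaviour."""
    return float(heavy.sum() / light.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    true_ratio = 2.0
    light_true = np.array([50, 200, 1000, 5000, 20000], dtype=float)  # ion counts per peptide
    results = {"mean of ratios": [], "sum of intensities": []}
    for _ in range(2000):
        light = rng.poisson(light_true).astype(float)                 # counting (shot) noise
        heavy = rng.poisson(true_ratio * light_true).astype(float)
        results["mean of ratios"].append(mean_of_ratios(light, heavy))
        results["sum of intensities"].append(sum_of_intensities(light, heavy))
    for name, vals in results.items():
        print(f"{name:>20}: mean={np.mean(vals):.3f}  sd={np.std(vals):.3f}")
```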
- Published
- 2009
45. Comparison of small n statistical tests of differential expression applied to microarrays
- Author
-
Anna Y. Lee, Owen Z Woody, Carl Murie, and Robert Nadon
- Subjects
Normalization (statistics) ,DNA, Complementary ,Biology ,lcsh:Computer applications to medicine. Medical informatics ,computer.software_genre ,Biochemistry ,Bayes' theorem ,Structural Biology ,Statistics ,lcsh:QH301-705.5 ,Molecular Biology ,Oligonucleotide Array Sequence Analysis ,Statistical hypothesis testing ,Models, Statistical ,Gene Expression Profiling ,Methodology Article ,Applied Mathematics ,Computational Biology ,Experimental data ,Computer Science Applications ,Gene expression profiling ,lcsh:Biology (General) ,Gene chip analysis ,lcsh:R858-859.7 ,False positive rate ,Data mining ,DNA microarray ,computer ,Algorithms - Abstract
Background DNA microarrays provide data for genome-wide patterns of expression between observation classes. Microarray studies often have small sample sizes, however, due to cost constraints or specimen availability. This can lead to poor random error estimates and inaccurate statistical tests of differential expression. We compare the performance of the standard t-test, fold change, and four small-n statistical test methods designed to circumvent these problems. We report results of various normalization methods for empirical microarray data and of various random error models for simulated data. Results Three Empirical Bayes methods (CyberT, BRB, and limma t-statistics) were the most effective statistical tests across simulated and both 2-colour cDNA and Affymetrix experimental data. The CyberT regularized t-statistic in particular was able to maintain expected false positive rates with simulated data showing high variances at low gene intensities, although at the cost of low true positive rates. The Local Pooled Error (LPE) test introduced a bias that lowered false positive rates below theoretically expected values and had lower power relative to the top performers. The standard two-sample t-test and fold change were also found to be sub-optimal for detecting differentially expressed genes. The generalized log transformation was shown to be beneficial in improving results with certain data sets, in particular high-variance cDNA data. Conclusion Pre-processing of data influences performance, and the proper combination of pre-processing and statistical testing is necessary for obtaining the best results. All three Empirical Bayes methods assessed in our study are good choices of statistical test for small-n microarray studies for both Affymetrix and cDNA data. Choice of method for a particular study will depend on software and normalization preferences.
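A bare-bones regularized t-statistic in the spirit of the Empirical Bayes tests compared here can be sketched as follows: each gene's variance is shrunk toward a variance pooled across genes before the t-statistic is formed, stabilising estimates when only a few replicates are available. The simple prior (across-gene mean variance with fixed prior degrees of freedom) is an illustrative stand-in for the hierarchical estimates that CyberT, BRB, and limma actually compute.

```python
import numpy as np
from scipy import stats

def moderated_t(group_a: np.ndarray, group_b: np.ndarray,
                prior_df: float = 10.0):
    """Regularized two-sample t-test per gene.  Rows are genes, columns replicates.
    Gene-wise pooled variances are shrunk toward the across-gene mean variance."""
    na, nb = group_a.shape[1], group_b.shape[1]
    df = na + nb - 2
    s2 = ((na - 1) * group_a.var(axis=1, ddof=1) +
          (nb - 1) * group_b.var(axis=1, ddof=1)) / df      # pooled variance per gene
    s2_prior = s2.mean()                                    # crude across-gene prior
    s2_mod = (prior_df * s2_prior + df * s2) / (prior_df + df)
    se = np.sqrt(s2_mod * (1.0 / na + 1.0 / nb))
    t = (group_a.mean(axis=1) - group_b.mean(axis=1)) / se
    p = 2 * stats.t.sf(np.abs(t), df=df + prior_df)         # moderated degrees of freedom
    return t, p

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    genes, reps = 1000, 3
    a = rng.normal(0, 1, (genes, reps))
    b = rng.normal(0, 1, (genes, reps))
    b[:50] += 1.5                                           # 50 truly changed genes
    t, p = moderated_t(a, b)
    print("genes with p < 0.01:", int((p < 0.01).sum()))
```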
- Published
- 2009
- Full Text
- View/download PDF
46. Absorption and hypnotizability: Context effects reexamined
- Author
-
Irene P. Hoyt, Robert Nadon, Patricia A. Register, and John F. Kihlstrom
- Subjects
Sociology and Political Science ,Social Psychology ,Psychometrics ,Suggestibility ,Construct validity ,Hypnotic susceptibility ,Context (language use) ,Test validity ,Personality test ,Absorption (psychology) ,Psychology ,Social psychology ,Developmental psychology - Abstract
Two independent studies failed to find evidence consistent with Council, Kirsch, and Hafner (1986), who argued that the repeatedly observed correlations between Tellegen's (1981) Absorption Scale (TAS) and hypnosis measures were artifacts of testing context, and de Groot, Gwynn, and Spanos (1988), who claimed evidence for a Gender x Context moderator effect. In the present studies, Ss completed the TAS and other personality questionnaires on 2 occasions: during an independent survey and later immediately prior to an assessment of hypnotizability. In Experiment 1 (N = 475), the effect of context on the relation between questionnaire scores and hypnotizability was weak and variable; in Experiment 2 (N = 434), these weak effects were reversed. The results reaffirm the construct validity of absorption as both a major dimension of personality and as a predictor of hypnotic responsiveness.
- Published
- 1991
- Full Text
- View/download PDF
47. mTORC1 promotes survival through translational control of Mcl-1
- Author
-
Scott C. Kogan, Francis Robert, Chen-Ju Lin, John R. Mills, Hans-Guido Wendel, Robert Nadon, Al Charest, David E. Housman, Abba Malina, Scott W. Lowe, Yoshitaka Hippo, Ulrike Trojahn, Jerry Pelletier, Samuel M. H. Chen, and Roderick T. Bronson
- Subjects
Lymphoma ,Immunoblotting ,mTORC1 ,Biology ,Mechanistic Target of Rapamycin Complex 1 ,medicine.disease_cause ,Mice ,Tuberous Sclerosis Complex 2 Protein ,medicine ,Animals ,Immunoprecipitation ,Protein kinase B ,PI3K/AKT/mTOR pathway ,Sirolimus ,Multidisciplinary ,Akt/PKB signaling pathway ,Reverse Transcriptase Polymerase Chain Reaction ,TOR Serine-Threonine Kinases ,Tumor Suppressor Proteins ,Proteins ,Biological Sciences ,Myeloid Cell Leukemia Sequence 1 Protein ,Gene Expression Regulation, Neoplastic ,Proto-Oncogene Proteins c-bcl-2 ,Multiprotein Complexes ,Cancer research ,Signal transduction ,biological phenomena, cell phenomena, and immunity ,Carcinogenesis ,Signal Transduction ,Transcription Factors - Abstract
Activation of the phosphatidylinositol 3-kinase (PI3K)/AKT signaling pathway is a frequent occurrence in human cancers and a major promoter of chemotherapeutic resistance. Inhibition of one downstream target in this pathway, mTORC1, has shown potential to improve chemosensitivity. However, the mechanisms and genetic modifications that confer sensitivity to mTORC1 inhibitors remain unclear. Here, we demonstrate that loss of TSC2 in the Eμ-Myc murine lymphoma model leads to mTORC1 activation and accelerated oncogenesis caused by a defective apoptotic program despite compromised AKT phosphorylation. Tumors from Tsc2+/− Eμ-Myc mice underwent rapid apoptosis upon blockade of mTORC1 by rapamycin. We identified myeloid cell leukemia sequence 1 (Mcl-1), a Bcl-2-like family member, as a translationally regulated genetic determinant of mTORC1-dependent survival. Our results indicate that the extent to which rapamycin can modulate expression of Mcl-1 is an important feature of the rapamycin response.
- Published
- 2008
48. Locally Linear Regression and the Calibration Problem for Micro-Array Analysis
- Author
-
Carl Murie, Isadora Antoniano Villalobos, Antonio Ciampi, Alina Dyachenko, Benjamin Rich, and Robert Nadon
- Subjects
Polynomial regression ,General linear model ,Statistics::Theory ,Proper linear model ,LOCALLY LINEAR MULTIPLE REGRESSION ,Computer science ,Regression analysis ,computer.software_genre ,AFFIMATRIX MICROARRAY ,Statistics::Computation ,BAYESIAN REGRESSION TREES ,Linear predictor function ,Bayesian multivariate linear regression ,Linear regression ,MICROARRAY CALIBRATION ,Statistics::Methodology ,Principal component regression ,Data mining ,BAYESIAN REGRESSION TREES, MICROARRAY CALIBRATION, AFFIMATRIX MICROARRAY, LOCALLY LINEAR MULTIPLE REGRESSION ,computer - Abstract
We review the concept of locally linear regression and its relationship to Diday's Nuées Dynamiques and to tree-structured linear regression. We describe the calibration problem in microarray analysis and propose a Bayesian approach based on tree-structured linear regression. Using the proposed approach, we analyze a subset of a large data set from an Affymetrix microarray calibration experiment. In this example, a tree-structured regression model outperforms a multiple regression model. We calculated 95% credible intervals for a sample of the data, obtaining reasonably good results. Future research will consider and compare several other approaches to locally linear regression.
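A toy illustration of the locally linear idea is sketched below: partition the predictor range (here by simple quantile bins rather than a fitted tree) and fit a separate straight line in each partition, which can track a saturating calibration curve that a single global line cannot. The binning rule and simulated data are assumptions for illustration; the chapter itself uses tree-structured partitions learned from the data within a Bayesian framework.

```python
import numpy as np

def fit_locally_linear(x: np.ndarray, y: np.ndarray, n_bins: int = 4):
    """Fit one straight line per quantile bin of x.  Returns bin edges and
    per-bin (slope, intercept) pairs."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    fits = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x <= hi)
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        fits.append((slope, intercept))
    return edges, fits

def predict_locally_linear(x: np.ndarray, edges: np.ndarray, fits) -> np.ndarray:
    """Evaluate the piecewise-linear fit at new x values."""
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(fits) - 1)
    slopes = np.array([f[0] for f in fits])[idx]
    intercepts = np.array([f[1] for f in fits])[idx]
    return slopes * x + intercepts

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    conc = rng.uniform(0, 10, 400)                            # spiked-in concentration
    signal = 3 * np.log1p(conc) + rng.normal(0, 0.2, 400)     # saturating calibration curve
    edges, fits = fit_locally_linear(conc, signal)
    resid_local = signal - predict_locally_linear(conc, edges, fits)
    resid_global = signal - np.polyval(np.polyfit(conc, signal, 1), conc)
    print("RMSE locally linear:", np.sqrt(np.mean(resid_local ** 2)))
    print("RMSE single line:   ", np.sqrt(np.mean(resid_global ** 2)))
```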
- Published
- 2007
- Full Text
- View/download PDF
49. The Shivplot: a graphical display for trend elucidation and exploratory analysis of microarray data
- Author
-
Owen Z Woody and Robert Nadon
- Subjects
Information Systems and Management ,Computer science ,Methodology ,Experimental data ,Health Informatics ,Density estimation ,computer.software_genre ,lcsh:Computer applications to medicine. Medical informatics ,Computer Science Applications ,Data set ,Exploratory data analysis ,Redundancy (engineering) ,Graph (abstract data type) ,lcsh:R858-859.7 ,Data mining ,Graphics ,computer ,Information Systems ,Interpretability - Abstract
Background High-throughput systems are powerful tools for the life science research community. The complexity and volume of data from these systems, however, demand special treatment. Graphical tools are needed to evaluate many aspects of the data throughout the analysis process because plots can provide quality assessments for thousands of values simultaneously. The utility of a plot, in turn, is contingent on both its interpretability and its efficiency. Results The shivplot, a graphical technique motivated by microarrays but applicable to any replicated high-throughput data set, is described. The plot capitalizes on the strengths of three well-established plotting graphics – a boxplot, a distribution density plot, and a variability vs intensity plot – by effectively combining them into a single representation. Conclusion The utility of the new display is illustrated with microarray data sets. The proposed graph, retaining all the information of its precursors, conserves space and minimizes redundancy, but also highlights features of the data that would be difficult to appreciate from the individual display components. We recommend the use of the shivplot both for exploratory data analysis and for the communication of experimental data in publications.
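As a rough mock-up of the combination the shivplot achieves, the following matplotlib sketch places a per-feature variability-versus-intensity scatter above an intensity density with a boxplot on the same scale. The layout choices and simulated data are mine, made purely for illustration; they do not reproduce the published display.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

def shivplot_like(replicates: np.ndarray) -> None:
    """Combined display for replicated high-throughput data (rows = features,
    columns = replicates): per-feature SD against mean intensity, with the
    intensity density and a boxplot aligned on the shared intensity axis."""
    means = replicates.mean(axis=1)
    sds = replicates.std(axis=1, ddof=1)
    fig, (ax_sd, ax_den) = plt.subplots(
        2, 1, sharex=True, figsize=(6, 5),
        gridspec_kw={"height_ratios": [3, 1]})
    ax_sd.scatter(means, sds, s=4, alpha=0.3)           # variability vs intensity
    ax_sd.set_ylabel("replicate SD")
    grid = np.linspace(means.min(), means.max(), 200)
    ax_den.plot(grid, gaussian_kde(means)(grid))        # intensity density
    ax_den.boxplot(means, vert=False, positions=[0],    # boxplot on the same scale
                   widths=0.02, manage_ticks=False)
    ax_den.set_xlabel("mean intensity")
    ax_den.set_yticks([])
    plt.tight_layout()
    plt.show()

if __name__ == "__main__":
    rng = np.random.default_rng(10)
    mu = rng.normal(8, 2, 5000)                          # simulated log2 intensities
    noise_sd = 0.1 + 0.3 / (1 + np.exp(mu - 6))          # more variable at low intensity
    data = mu[:, None] + rng.normal(0, noise_sd[:, None], (5000, 3))
    shivplot_like(data)
```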
- Published
- 2006
50. A methodology for global validation of microarray experiments
- Author
-
Robert Sladek, Owen Z Woody, Mathieu Miron, Alexandre Marcil, Carl Murie, and Robert Nadon
- Subjects
Microarray ,Correlation coefficient ,Computational biology ,Biology ,lcsh:Computer applications to medicine. Medical informatics ,Biochemistry ,Mice ,Structural Biology ,3T3-L1 Cells ,Animals ,Cluster Analysis ,Microarray databases ,Computer Simulation ,lcsh:QH301-705.5 ,Molecular Biology ,DNA Primers ,Oligonucleotide Array Sequence Analysis ,Genetics ,Models, Statistical ,Microarray analysis techniques ,Gene Expression Profiling ,Methodology Article ,Applied Mathematics ,Reproducibility of Results ,Computer Science Applications ,Gene expression profiling ,Concordance correlation coefficient ,lcsh:Biology (General) ,Research Design ,Sample size determination ,Data Interpretation, Statistical ,Sample Size ,Regression Analysis ,lcsh:R858-859.7 ,DNA microarray ,Software - Abstract
Background DNA microarrays are popular tools for measuring gene expression of biological samples. This ever-increasing popularity ensures that a large number of microarray studies are conducted, many of which have data publicly available for mining by other investigators. Under most circumstances, validation of differential expression of genes is performed on a gene-by-gene basis. Thus, it is not possible to generalize validation results to the remaining majority of non-validated genes or to evaluate the overall quality of these studies. Results We present an approach for the global validation of DNA microarray experiments that will allow researchers to evaluate the general quality of their experiment and to extrapolate validation results of a subset of genes to the remaining non-validated genes. We illustrate why the popular strategy of selecting only the most differentially expressed genes for validation generally fails as a global validation strategy and propose random stratified sampling as a better gene selection method. We also illustrate shortcomings of often-used validation indices such as overlap of significant effects and the correlation coefficient and recommend the concordance correlation coefficient (CCC) as an alternative. Conclusion We provide recommendations that will enhance validity checks of microarray experiments while minimizing the need to run a large number of labour-intensive individual validation assays.
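The two recommendations above can be sketched briefly in Python: choose validation genes by random stratified sampling across the range of expression changes instead of only the largest changes, and summarise cross-platform agreement with Lin's concordance correlation coefficient, which penalises both poor correlation and systematic shifts. Function names, the stratification rule, and the simulated data are illustrative assumptions.

```python
import numpy as np

def concordance_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between two measurement platforms."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

def stratified_sample(log_ratios: np.ndarray, n_strata: int = 5,
                      per_stratum: int = 4, seed: int = 0) -> np.ndarray:
    """Pick validation candidates uniformly across strata of the log-ratio range,
    rather than only the most differentially expressed genes."""
    rng = np.random.default_rng(seed)
    edges = np.quantile(log_ratios, np.linspace(0, 1, n_strata + 1))
    chosen = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.where((log_ratios >= lo) & (log_ratios <= hi))[0]
        chosen.extend(rng.choice(idx, size=per_stratum, replace=False))
    return np.array(chosen)

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    array_lfc = rng.normal(0, 1, 2000)                        # microarray log2 ratios
    qpcr_lfc = 0.8 * array_lfc + rng.normal(0, 0.4, 2000)     # simulated validation assay
    picks = stratified_sample(array_lfc)
    print("validation genes selected:", len(picks))
    print("CCC on validated subset: %.2f" % concordance_ccc(array_lfc[picks], qpcr_lfc[picks]))
```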
- Published
- 2006
- Full Text
- View/download PDF