7 results for "Jones Wendell D"
Search Results
2. The MicroArray Quality Control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models.
- Author
Shi L, Campbell G, Jones WD, Campagne F, Wen Z, Walker SJ, Su Z, Chu TM, Goodsaid FM, Pusztai L, Shaughnessy JD Jr, Oberthuer A, Thomas RS, Paules RS, Fielden M, Barlogie B, Chen W, Du P, Fischer M, Furlanello C, Gallas BD, Ge X, Megherbi DB, Symmans WF, Wang MD, Zhang J, Bitter H, Brors B, Bushel PR, Bylesjo M, Chen M, Cheng J, Cheng J, Chou J, Davison TS, Delorenzi M, Deng Y, Devanarayan V, Dix DJ, Dopazo J, Dorff KC, Elloumi F, Fan J, Fan S, Fan X, Fang H, Gonzaludo N, Hess KR, Hong H, Huan J, Irizarry RA, Judson R, Juraeva D, Lababidi S, Lambert CG, Li L, Li Y, Li Z, Lin SM, Liu G, Lobenhofer EK, Luo J, Luo W, McCall MN, Nikolsky Y, Pennello GA, Perkins RG, Philip R, Popovici V, Price ND, Qian F, Scherer A, Shi T, Shi W, Sung J, Thierry-Mieg D, Thierry-Mieg J, Thodima V, Trygg J, Vishnuvajjala L, Wang SJ, Wu J, Wu Y, Xie Q, Yousef WA, Zhang L, Zhang X, Zhong S, Zhou Y, Zhu S, Arasappan D, Bao W, Lucas AB, Berthold F, Brennan RJ, Buness A, Catalano JG, Chang C, Chen R, Cheng Y, Cui J, Czika W, Demichelis F, Deng X, Dosymbekov D, Eils R, Feng Y, Fostel J, Fulmer-Smentek S, Fuscoe JC, Gatto L, Ge W, Goldstein DR, Guo L, Halbert DN, Han J, Harris SC, Hatzis C, Herman D, Huang J, Jensen RV, Jiang R, Johnson CD, Jurman G, Kahlert Y, Khuder SA, Kohl M, Li J, Li L, Li M, Li QZ, Li S, Li Z, Liu J, Liu Y, Liu Z, Meng L, Madera M, Martinez-Murillo F, Medina I, Meehan J, Miclaus K, Moffitt RA, Montaner D, Mukherjee P, Mulligan GJ, Neville P, Nikolskaya T, Ning B, Page GP, Parker J, Parry RM, Peng X, Peterson RL, Phan JH, Quanz B, Ren Y, Riccadonna S, Roter AH, Samuelson FW, Schumacher MM, Shambaugh JD, Shi Q, Shippy R, Si S, Smalter A, Sotiriou C, Soukup M, Staedtler F, Steiner G, Stokes TH, Sun Q, Tan PY, Tang R, Tezak Z, Thorn B, Tsyganova M, Turpaz Y, Vega SC, Visintainer R, von Frese J, Wang C, Wang E, Wang J, Wang W, Westermann F, Willey JC, Woods M, Wu S, Xiao N, Xu J, Xu L, Yang L, Zeng X, Zhang J, Zhang L, Zhang M, Zhao C, Puri RK, Scherf U, Tong W, and Wolfinger RD
- Subjects
- Animals, Breast Neoplasms diagnosis, Breast Neoplasms genetics, Disease Models, Animal, Female, Gene Expression Profiling methods, Gene Expression Profiling standards, Guidelines as Topic, Humans, Liver Diseases etiology, Liver Diseases pathology, Lung Diseases etiology, Lung Diseases pathology, Multiple Myeloma diagnosis, Multiple Myeloma genetics, Neoplasms diagnosis, Neuroblastoma diagnosis, Neuroblastoma genetics, Predictive Value of Tests, Quality Control, Rats, Survival Analysis, Liver Diseases genetics, Lung Diseases genetics, Neoplasms genetics, Neoplasms mortality, Oligonucleotide Array Sequence Analysis methods, Oligonucleotide Array Sequence Analysis standards
- Abstract
Gene expression data from microarrays are being applied to predict preclinical and clinical endpoints, but the reliability of these predictions has not been established. In the MAQC-II project, 36 independent teams analyzed six microarray data sets to generate predictive models for classifying a sample with respect to one of 13 endpoints indicative of lung or liver toxicity in rodents, or of breast cancer, multiple myeloma or neuroblastoma in humans. In total, >30,000 models were built using many combinations of analytical methods. The teams generated predictive models without knowing the biological meaning of some of the endpoints and, to mimic clinical reality, tested the models on data that had not been used for training. We found that model performance depended largely on the endpoint and team proficiency and that different approaches generated models of similar performance. The conclusions and recommendations from MAQC-II should be useful for regulatory agencies, study committees and independent investigators that evaluate methods for global gene expression analysis.
- Published
- 2010
- Full Text
- View/download PDF
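The MAQC-II workflow described in the abstract above — build a predictive model on training data, then score it on data never used for training — can be sketched with a toy example. Everything below is illustrative and not from the paper: the nearest-centroid classifier and the synthetic five-feature data are stand-ins for the many modelling methods the 36 teams used. Only the external-validation design and the Matthews correlation coefficient (MCC), the performance metric MAQC-II emphasised for binary endpoints, reflect the study itself.

```python
import math
import random

def fit_centroids(X, y):
    """Per-class feature centroids: a deliberately simple classifier,
    standing in for the many modelling approaches MAQC-II teams used."""
    centroids = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        centroids[c] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def predict(centroids, X):
    # Assign each sample to the class with the nearest centroid.
    return [min(centroids, key=lambda c: math.dist(x, centroids[c]))
            for x in X]

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for a binary endpoint."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Synthetic "expression" data: class 1 shifts the mean of 5 features.
random.seed(0)
def draw(label, n):
    shift = 1.5 if label == 1 else 0.0
    return [[random.gauss(shift, 1.0) for _ in range(5)] for _ in range(n)]

X_train = draw(0, 30) + draw(1, 30)
y_train = [0] * 30 + [1] * 30
X_valid = draw(0, 20) + draw(1, 20)   # external set, untouched in training
y_valid = [0] * 20 + [1] * 20

model = fit_centroids(X_train, y_train)
score = mcc(y_valid, predict(model, X_valid))
```

Keeping the validation set out of every training decision, as here, is the "clinical reality" constraint the abstract describes.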
3. Comparison of comparative genomic hybridization technologies across microarray platforms.
- Author
Hester SD, Reid L, Nowak N, Jones WD, Parker JS, Knudtson K, Ward W, Tiesman J, and Denslow ND
- Subjects
- Algorithms, Case-Control Studies, Chromosome Deletion, Chromosomes, Artificial, Bacterial, Gene Dosage, Genetic Variation, Genome, Human, HL-60 Cells, Humans, Nucleic Acid Amplification Techniques, Oligonucleotide Array Sequence Analysis economics, Reference Standards, Reproducibility of Results, Software, Chromosome Aberrations, Comparative Genomic Hybridization methods, Oligonucleotide Array Sequence Analysis methods
- Abstract
In the 2007 Association of Biomolecular Resource Facilities Microarray Research Group project, we analyzed HL-60 DNA with five platforms: Agilent, Affymetrix 500K, Affymetrix U133 Plus 2.0, Illumina, and RPCI 19K BAC arrays. Copy number variation was analyzed using circular binary segmentation (CBS) analysis of log ratio scores from four independently assessed hybridizations of each platform. Data obtained from these platforms were assessed for reproducibility and the ability to detect formerly reported copy number variations in HL-60. In HL-60, all of the tested platforms detected genomic DNA amplification of the 8q24 locus, trisomy 18, and monosomy X; and deletions at loci 5q11.2~q31, 9p21.3~p22, 10p12~p15, 14q22~q31, and 17p12~p13.3. In the HL-60 genome, at least two of the five platforms detected five novel losses and five novel gains. This report provides guidance in the selection of platforms based on this wide-ranging evaluation of available CGH platforms.
- Published
- 2009
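The circular binary segmentation (CBS) step mentioned in the abstract above can be illustrated with a stripped-down sketch. Real CBS uses a circular change-point statistic with permutation-based significance testing; the toy below keeps only the recursive mean-shift splitting, and the probe profile is hypothetical (a tiny alternating offset stands in for noise so the example stays deterministic).

```python
import statistics

def best_split(x):
    """Score every split point with a t-like statistic for a mean
    shift; return (best_score, best_index)."""
    best_score, best_i = 0.0, None
    for i in range(2, len(x) - 2):
        left, right = x[:i], x[i:]
        se = (statistics.pvariance(left) / len(left)
              + statistics.pvariance(right) / len(right)) ** 0.5
        if se == 0:
            continue
        score = abs(statistics.fmean(left) - statistics.fmean(right)) / se
        if score > best_score:
            best_score, best_i = score, i
    return best_score, best_i

def segment(x, start=0, threshold=6.0, min_len=4, out=None):
    """Plain binary segmentation of a log2-ratio profile into regions
    of constant copy number. Real CBS adds a 'circular' statistic and
    a permutation test; this sketch keeps only the recursion."""
    if out is None:
        out = []
    score, i = best_split(x) if len(x) >= 2 * min_len else (0.0, None)
    if i is not None and score > threshold:
        segment(x[:i], start, threshold, min_len, out)
        segment(x[i:], start + i, threshold, min_len, out)
    else:
        out.append((start, start + len(x), round(statistics.fmean(x), 2)))
    return out

# Hypothetical profile: 25 copy-neutral probes (log2 ratio near 0),
# then 25 probes over a one-copy deletion (log2 ratio near -1).
profile = [0.0 + 0.01 * (-1) ** i for i in range(25)]
profile += [-1.0 + 0.01 * (-1) ** i for i in range(25, 50)]
segments = segment(profile)
```

The sketch recovers two segments, `(0, 25, 0.0)` and `(25, 50, -1.0)`, i.e. a breakpoint at probe 25 with the deleted region's mean log2 ratio near -1.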
4. The balance of reproducibility, sensitivity, and specificity of lists of differentially expressed genes in microarray studies.
- Author
Shi L, Jones WD, Jensen RV, Harris SC, Perkins RG, Goodsaid FM, Guo L, Croner LJ, Boysen C, Fang H, Qian F, Amur S, Bao W, Barbacioru CC, Bertholet V, Cao XM, Chu TM, Collins PJ, Fan XH, Frueh FW, Fuscoe JC, Guo X, Han J, Herman D, Hong H, Kawasaki ES, Li QZ, Luo Y, Ma Y, Mei N, Peterson RL, Puri RK, Shippy R, Su Z, Sun YA, Sun H, Thorn B, Turpaz Y, Wang C, Wang SJ, Warrington JA, Willey JC, Wu J, Xie Q, Zhang L, Zhang L, Zhong S, Wolfinger RD, and Tong W
- Subjects
- Computer Simulation, Models, Genetic, Models, Statistical, Reproducibility of Results, Sensitivity and Specificity, Algorithms, Data Interpretation, Statistical, Gene Expression Profiling methods, Genes genetics, Oligonucleotide Array Sequence Analysis methods
- Abstract
Background: Reproducibility is a fundamental requirement in scientific experiments. Some recent publications have claimed that microarrays are unreliable because lists of differentially expressed genes (DEGs) are not reproducible in similar experiments. Meanwhile, new statistical methods for identifying DEGs continue to appear in the scientific literature. The resultant variety of existing and emerging methods exacerbates confusion and continuing debate in the microarray community on the appropriate choice of methods for identifying reliable DEG lists.
Results: Using the data sets generated by the MicroArray Quality Control (MAQC) project, we investigated the impact on the reproducibility of DEG lists of a few widely used gene selection procedures. We present comprehensive results from inter-site comparisons using the same microarray platform, cross-platform comparisons using multiple microarray platforms, and comparisons between microarray results and those from TaqMan, the widely regarded "standard" gene expression platform. Our results demonstrate that (1) previously reported discordance between DEG lists could simply result from ranking and selecting DEGs solely by statistical significance (P) derived from widely used simple t-tests; (2) when fold change (FC) is used as the ranking criterion with a non-stringent P-value cutoff filtering, the DEG lists become much more reproducible, especially when fewer genes are selected as differentially expressed, as is the case in most microarray studies; and (3) the instability of short DEG lists based solely on P-value ranking is an expected mathematical consequence of the high variability of the t-values; the more stringent the P-value threshold, the less reproducible the DEG list is. These observations are also consistent with results from extensive simulation calculations.
Conclusion: We recommend the use of FC-ranking plus a non-stringent P cutoff as a straightforward baseline practice for generating more reproducible DEG lists. Specifically, the P-value cutoff should not be stringent (too small) and FC should be as large as possible. Our results provide practical guidance on choosing appropriate FC and P-value cutoffs when selecting a given number of DEGs. The FC criterion enhances reproducibility, whereas the P criterion balances sensitivity and specificity.
- Published
- 2008
- Full Text
- View/download PDF
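The recipe this abstract recommends — filter genes with a non-stringent P cutoff, then rank the survivors by absolute fold change — can be sketched directly. The data below are synthetic, and for brevity the p-value uses a normal approximation to the Welch t statistic (`math.erfc`) rather than the t distribution a real analysis would use; only the FC-plus-P selection logic reflects the paper.

```python
import math
import random
import statistics

def welch_p(a, b):
    """Two-sided p-value for a Welch two-sample t statistic, using a
    normal approximation (a simplification for this sketch)."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    t = (statistics.fmean(a) - statistics.fmean(b)) / se
    return math.erfc(abs(t) / math.sqrt(2))

def select_degs(expr_a, expr_b, n_top, p_cutoff=0.05):
    """Apply a non-stringent P filter, then rank survivors by
    absolute log2 fold change and keep the top n_top genes."""
    candidates = []
    for gene, (a, b) in enumerate(zip(expr_a, expr_b)):
        if welch_p(a, b) < p_cutoff:
            fc = statistics.fmean(a) - statistics.fmean(b)  # data are log2
            candidates.append((abs(fc), gene))
    candidates.sort(reverse=True)
    return [gene for _, gene in candidates[:n_top]]

# Synthetic log2 expression: genes 0-9 truly differ by 2 log2 units,
# the remaining 90 genes are null; 5 replicates per condition.
random.seed(1)
def replicates(mean, n=5):
    return [random.gauss(mean, 0.3) for _ in range(n)]

expr_a = [replicates(10.0 if g < 10 else 8.0) for g in range(100)]
expr_b = [replicates(8.0) for g in range(100)]
top = select_degs(expr_a, expr_b, n_top=10)
```

Because the P filter is loose and the final ordering is by fold change, small perturbations in the t statistics reorder the list far less than pure P-ranking would, which is the paper's central point.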
5. The MicroArray Quality Control (MAQC) project shows inter- and intraplatform reproducibility of gene expression measurements.
- Author
Shi L, Reid LH, Jones WD, Shippy R, Warrington JA, Baker SC, Collins PJ, de Longueville F, Kawasaki ES, Lee KY, Luo Y, Sun YA, Willey JC, Setterquist RA, Fischer GM, Tong W, Dragan YP, Dix DJ, Frueh FW, Goodsaid FM, Herman D, Jensen RV, Johnson CD, Lobenhofer EK, Puri RK, Schrf U, Thierry-Mieg J, Wang C, Wilson M, Wolber PK, Zhang L, Amur S, Bao W, Barbacioru CC, Lucas AB, Bertholet V, Boysen C, Bromley B, Brown D, Brunner A, Canales R, Cao XM, Cebula TA, Chen JJ, Cheng J, Chu TM, Chudin E, Corson J, Corton JC, Croner LJ, Davies C, Davison TS, Delenstarr G, Deng X, Dorris D, Eklund AC, Fan XH, Fang H, Fulmer-Smentek S, Fuscoe JC, Gallagher K, Ge W, Guo L, Guo X, Hager J, Haje PK, Han J, Han T, Harbottle HC, Harris SC, Hatchwell E, Hauser CA, Hester S, Hong H, Hurban P, Jackson SA, Ji H, Knight CR, Kuo WP, LeClerc JE, Levy S, Li QZ, Liu C, Liu Y, Lombardi MJ, Ma Y, Magnuson SR, Maqsodi B, McDaniel T, Mei N, Myklebost O, Ning B, Novoradovskaya N, Orr MS, Osborn TW, Papallo A, Patterson TA, Perkins RG, Peters EH, Peterson R, Philips KL, Pine PS, Pusztai L, Qian F, Ren H, Rosen M, Rosenzweig BA, Samaha RR, Schena M, Schroth GP, Shchegrova S, Smith DD, Staedtler F, Su Z, Sun H, Szallasi Z, Tezak Z, Thierry-Mieg D, Thompson KL, Tikhonova I, Turpaz Y, Vallanat B, Van C, Walker SJ, Wang SJ, Wang Y, Wolfinger R, Wong A, Wu J, Xiao C, Xie Q, Xu J, Yang W, Zhang L, Zhong S, Zong Y, and Slikker W Jr
- Subjects
- Equipment Design, Equipment Failure Analysis, Gene Expression Profiling methods, Quality Control, Reproducibility of Results, Sensitivity and Specificity, United States, Gene Expression Profiling instrumentation, Oligonucleotide Array Sequence Analysis instrumentation, Quality Assurance, Health Care methods
- Abstract
Over the last decade, the introduction of microarray technology has had a profound impact on gene expression research. The publication of studies with dissimilar or altogether contradictory results, obtained using different microarray platforms to analyze identical RNA samples, has raised concerns about the reliability of this technology. The MicroArray Quality Control (MAQC) project was initiated to address these concerns, as well as other performance and data analysis issues. Expression data on four titration pools from two distinct reference RNA samples were generated at multiple test sites using a variety of microarray-based and alternative technology platforms. Here we describe the experimental design and probe mapping efforts behind the MAQC project. We show intraplatform consistency across test sites as well as a high level of interplatform concordance in terms of genes identified as differentially expressed. This study provides a resource that represents an important first step toward establishing a framework for the use of microarrays in clinical and regulatory settings.
- Published
- 2006
- Full Text
- View/download PDF
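The interplatform concordance of differentially expressed genes reported in this abstract was summarised in the MAQC comparisons as the percentage of overlapping genes (POG) between equal-length DEG lists. A minimal version, with hypothetical gene lists:

```python
def pog(list_a, list_b):
    """Percentage of overlapping genes between two DEG lists of the
    same length (the POG concordance measure used in MAQC)."""
    if len(list_a) != len(list_b):
        raise ValueError("POG compares equal-length DEG lists")
    overlap = len(set(list_a) & set(list_b))
    return 100.0 * overlap / len(list_a)

# Hypothetical top-5 DEG lists from two platforms sharing 4 genes.
platform1 = ["TP53", "EGFR", "MYC", "BRCA1", "KRAS"]
platform2 = ["TP53", "EGFR", "MYC", "BRCA1", "CDK4"]
concordance = pog(platform1, platform2)   # 80.0
```

The gene symbols are illustrative only; the measure itself is just the set overlap normalised by list length.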
6. Using RNA sample titrations to assess microarray platform performance and normalization techniques.
- Author
Shippy R, Fulmer-Smentek S, Jensen RV, Jones WD, Wolber PK, Johnson CD, Pine PS, Boysen C, Guo X, Chudin E, Sun YA, Willey JC, Thierry-Mieg J, Thierry-Mieg D, Setterquist RA, Wilson M, Lucas AB, Novoradovskaya N, Papallo A, Turpaz Y, Baker SC, Warrington JA, Shi L, and Herman D
- Subjects
- Algorithms, Reference Values, Reproducibility of Results, Sensitivity and Specificity, United States, Equipment Failure Analysis methods, Gene Expression Profiling instrumentation, Gene Expression Profiling standards, Oligonucleotide Array Sequence Analysis instrumentation, Oligonucleotide Array Sequence Analysis standards, RNA analysis, RNA genetics
- Abstract
We have assessed the utility of RNA titration samples for evaluating microarray platform performance and the impact of different normalization methods on the results obtained. As part of the MicroArray Quality Control project, we investigated the performance of five commercial microarray platforms using two independent RNA samples and two titration mixtures of these samples. Focusing on 12,091 genes common across all platforms, we determined the ability of each platform to detect the correct titration response across the samples. Global deviations from the response predicted by the titration ratios were observed. These differences could be explained by variations in relative amounts of messenger RNA as a fraction of total RNA between the two independent samples. Overall, both the qualitative and quantitative correspondence across platforms was high. In summary, titration samples may be regarded as a valuable tool, not only for assessing microarray platform performance and different analysis methods, but also for determining some underlying biological features of the samples.
- Published
- 2006
- Full Text
- View/download PDF
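The titration check described in this abstract has a simple form: if the two pools mix the independent samples A and B at 3:1 and 1:3 (the ratios used in the MAQC design), each gene's measured signals should fall between, and in the same order as, the pure samples. The sketch below assumes equal mRNA fractions in both samples, the very assumption the paper shows can fail and bias the predicted response; the numeric values are hypothetical.

```python
def predicted_mixture(a_signal, b_signal, frac_a):
    """Expected linear-scale signal for a pool containing frac_a of
    sample A, assuming equal mRNA fractions in A and B (an assumption
    the paper shows may require correction)."""
    return frac_a * a_signal + (1 - frac_a) * b_signal

def titration_consistent(a, b, c, d):
    """True when measured signals for pool C (75% A) and pool D
    (25% A) fall in the order implied by the pure samples A and B."""
    return (a >= c >= d >= b) or (a <= c <= d <= b)

# Hypothetical gene: 100 units in sample A, 20 units in sample B.
expected_c = predicted_mixture(100.0, 20.0, 0.75)   # 80.0
expected_d = predicted_mixture(100.0, 20.0, 0.25)   # 40.0
ok = titration_consistent(100.0, 20.0, 85.0, 35.0)  # True
```

Counting the fraction of genes for which `titration_consistent` holds gives a platform-level score of the "correct titration response" the abstract evaluates.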