682 results for "Genomics standards"
Search Results
602. The power of ethics: a case study from Sweden on the social life of moral concerns in policy processes.
- Author
- Hoeyer K
- Subjects
- Anthropology, Cultural trends, Genomics standards, Humans, Sweden, Tissue Banks trends, Anthropology, Cultural ethics, Genomics ethics, Public Policy, Tissue Banks ethics
- Abstract
In this paper I report on an ethnographic study of an ethics policy developed by a start-up genomics company at the time it gained all commercial rights to a population-based biobank in the town of Umeå in northern Sweden. Tracing the interdependencies between power and morality, my research compares moral reflections and stances among 1) policymakers, 2) health professionals and 3) donors, in relation to the issues identified in the policy. These people seem to agree that trust and fairness are important issues and that 'something' needs protection in the face of commercial genetic research. However, their perceptions of trust, fairness and what it is that needs protection differ significantly. I conclude by considering the implications of variances in moral perspectives for the social study of ethics.
- Published
- 2006
- Full Text
- View/download PDF
603. Impact of microarray data quality on genomic data submissions to the FDA.
- Author
- Frueh FW
- Subjects
- Government Regulation, Reference Standards, United States, Databases, Genetic, Genomics standards, Guidelines as Topic, Information Storage and Retrieval standards, Oligonucleotide Array Sequence Analysis standards, Quality Assurance, Health Care standards, United States Food and Drug Administration
- Published
- 2006
- Full Text
- View/download PDF
604. Data quality in genomics and microarrays.
- Author
- Ji H and Davis RW
- Subjects
- Reference Standards, United States, Databases, Genetic standards, Genomics standards, Government Regulation, Guidelines as Topic, Information Storage and Retrieval standards, Oligonucleotide Array Sequence Analysis standards, Quality Assurance, Health Care standards
- Published
- 2006
- Full Text
- View/download PDF
605. Finding function: evaluation methods for functional genomic data.
- Author
- Myers CL, Barrett DR, Hibbs MA, Huttenhower C, and Troyanskaya OG
- Subjects
- Algorithms, Computational Biology standards, Databases, Genetic standards, Genomics standards, Proteomics methods, Proteomics standards, Reproducibility of Results, Software standards, Computational Biology methods, Genomics methods
- Abstract
Background: Accurate evaluation of the quality of genomic or proteomic data and computational methods is vital to our ability to use them for formulating novel biological hypotheses and directing further experiments. There is currently no standard approach to evaluation in functional genomics. Our analysis of existing approaches shows that they are inconsistent and contain substantial functional biases that render the resulting evaluations misleading both quantitatively and qualitatively. These problems make it essentially impossible to compare computational methods or large-scale experimental datasets and also result in conclusions that generalize poorly in most biological applications. Results: We reveal issues with current evaluation methods here and suggest new approaches to evaluation that facilitate accurate and representative characterization of genomic methods and data. Specifically, we describe a functional genomics gold standard based on curation by expert biologists and demonstrate its use as an effective means of evaluation of genomic approaches. Our evaluation framework and gold standard are freely available to the community through our website. Conclusion: Proper methods for evaluating genomic data and computational approaches will determine how much we, as a community, are able to learn from the wealth of available data. We propose one possible solution to this problem here but emphasize that this topic warrants broader community discussion.
- Published
- 2006
- Full Text
- View/download PDF
606. How reliable are empirical genomic scans for selective sweeps?
- Author
- Teshima KM, Coop G, and Przeworski M
- Subjects
- Africa South of the Sahara, Demography, Gene Frequency, Genomics standards, Humans, Models, Genetic, Mutation, Research Design, Zea mays, Genetics, Population methods, Genomics methods, Polymorphism, Genetic, Selection, Genetic
- Abstract
The beneficial substitution of an allele shapes patterns of genetic variation at linked sites. Thus, in principle, adaptations can be mapped by looking for the signature of directional selection in polymorphism data. In practice, such efforts are hampered by the need for an accurate characterization of the demographic history of the species and of the effects of positive selection. In an attempt to circumvent these difficulties, researchers are increasingly taking a purely empirical approach, in which a large number of genomic regions are ordered by summaries of the polymorphism data, and loci with extreme values are considered to be likely targets of positive selection. We evaluated the reliability of the "empirical" approach, focusing on applications to human data and to maize. To do so, we considered a coalescent model of directional selection in a sensible demographic setting, allowing for selection on standing variation as well as on a new mutation. Our simulations suggest that while empirical approaches will identify several interesting candidates, they will also miss many--in some cases, most--loci of interest. The extent of the trade-off depends on the mode of positive selection and the demographic history of the population. Specifically, the false-discovery rate is higher when directional selection involves a recessive rather than a co-dominant allele, when it acts on a previously neutral rather than a new allele, and when the population has experienced a population bottleneck rather than maintained a constant size. One implication of these results is that, insofar as attributes of the beneficial mutation (e.g., the dominance coefficient) affect the power to detect targets of selection, genomic scans will yield an unrepresentative subset of loci that contribute to adaptations.
- Published
- 2006
- Full Text
- View/download PDF
607. Generation Challenge Programme (GCP): standards for crop data.
- Author
- Bruskiewich R, Davenport G, Hazekamp T, Metz T, Ruiz M, Simon R, Takeya M, Lee J, Senger M, McLaren G, and Van Hintum T
- Subjects
- Developing Countries, Software standards, Crops, Agricultural genetics, Genomics standards, Molecular Biology standards
- Abstract
The Generation Challenge Programme (GCP) is an international research consortium striving to apply molecular biological advances to crop improvement for developing countries. Central to its activities is the creation of a next generation global crop information platform and network to share genetic resources, genomics, and crop improvement information. This system is being designed based on a comprehensive scientific domain object model and associated shared ontology. This model covers germplasm, genotype, phenotype, functional genomics, and geographical information data types needed in GCP research. This paper provides an overview of this modeling effort.
- Published
- 2006
- Full Text
- View/download PDF
608. Development of FuGO: an ontology for functional genomics investigations.
- Author
- Whetzel PL, Brinkman RR, Causton HC, Fan L, Field D, Fostel J, Fragoso G, Gray T, Heiskanen M, Hernandez-Boussard T, Morrison N, Parkinson H, Rocca-Serra P, Sansone SA, Schober D, Smith B, Stevens R, Stoeckert CJ Jr, Taylor C, White J, and Wood A
- Subjects
- Biomedical Research organization & administration, Genomics organization & administration, Terminology as Topic, Workforce, Biomedical Research standards, Genomics standards
- Abstract
The development of the Functional Genomics Investigation Ontology (FuGO) is a collaborative, international effort that will provide a resource for annotating functional genomics investigations, including the study design, protocols and instrumentation used, the data generated and the types of analysis performed on the data. FuGO will contain both terms that are universal to all functional genomics investigations and those that are domain specific. In this way, the ontology will serve as the "semantic glue" to provide a common understanding of data from across these disparate data sources. In addition, FuGO will reference out to existing mature ontologies to avoid the need to duplicate these resources, and will do so in such a way as to enable their ease of use in annotation. This project is in the early stages of development; the paper will describe efforts to initiate the project, the scope and organization of the project, the work accomplished to date, and the challenges encountered, as well as future plans.
- Published
- 2006
- Full Text
- View/download PDF
609. A strategy capitalizing on synergies: the Reporting Structure for Biological Investigation (RSBI) working group.
- Author
- Sansone SA, Rocca-Serra P, Tong W, Fostel J, Morrison N, and Jones AR
- Subjects
- Nutritional Physiological Phenomena genetics, Semantics, Toxicogenetics standards, Databases, Genetic standards, Genomics standards, Oligonucleotide Array Sequence Analysis standards
- Abstract
In this article we present the Reporting Structure for Biological Investigation (RSBI), a working group under the Microarray Gene Expression Data (MGED) Society umbrella. RSBI brings together several communities to tackle the challenges associated with integrating data and representing complex biological investigations employing multiple OMICS technologies. Currently, RSBI includes the environmental genomics, nutrigenomics and toxicogenomics communities, where independent activities are underway to develop databases and establish data communication standards within their respective domains. The RSBI working group has been conceived as a "single point of focus" for these communities, conforming to the generally accepted view that duplication and incompatibility should be avoided where possible. This endeavour has aimed to synergize insular solutions into one common terminology across biologically driven standardisation efforts, and has also resulted in strong collaborations and shared understanding with those in the technological domain. Through extensive liaisons with many standards efforts, several threads have been woven with the hope that ultimately technology-centered standards and their specific extensions into biological domains of interest will not only stand alone, but will also be able to function together, as interchangeable modules.
- Published
- 2006
- Full Text
- View/download PDF
610. Genome Reviews: standardizing content and representation of information about complete genomes.
- Author
- Sterk P, Kersey PJ, and Apweiler R
- Subjects
- Animals, Humans, Databases, Nucleic Acid standards, Genome, Genome, Human, Genomics standards
- Abstract
The Genome Reviews database provides up-to-date, standardized, and comprehensively annotated views of the genomic sequence of organisms with completely deciphered genomes. Currently, Genome Reviews contains information about the genomes of archaea, bacteria, and selected lower eukaryotes. Expansion to viral genomes and additional eukaryotes is planned. Genome Reviews is available for download in relational and flat file formats. In this paper, the rationale behind the creation of Genome Reviews, the approach taken in standardizing the data and its representation, and particular issues encountered in doing so are described.
- Published
- 2006
- Full Text
- View/download PDF
611. Requirements and standards for organelle genome databases.
- Author
- Boore JL
- Subjects
- Animals, Chloroplasts genetics, Humans, Mitochondria genetics, Databases, Nucleic Acid standards, Genome, Genome, Human, Genomics standards, Organelles genetics
- Abstract
Mitochondria and plastids (collectively called organelles) descended from prokaryotes that adopted an intracellular, endosymbiotic lifestyle within early eukaryotes. Comparisons of their remnant genomes address a wide variety of biological questions, especially when including the genomes of their prokaryotic relatives and the many genes transferred to the eukaryotic nucleus during the transitions from endosymbiont to organelle. The pace of producing complete organellar genome sequences now makes it unfeasible to do broad comparisons using the primary literature and, even if it were feasible, it is now becoming uncommon for journals to accept detailed descriptions of genome-level features. Unfortunately, no database is completely useful for this task, since they have little standardization and are riddled with error. Further, the descriptors necessary to make full use of these data are generally lacking. Here, I outline what is currently wrong and what must be done to make this data useful to the scientific community.
- Published
- 2006
- Full Text
- View/download PDF
612. Evidence standards in experimental and inferential INSDC Third Party Annotation data.
- Author
- Cochrane G, Bates K, Apweiler R, Tateno Y, Mashima J, Kosuge T, Mizrachi IK, Schafer S, and Fetchko M
- Subjects
- Animals, Data Collection standards, Humans, Databases, Nucleic Acid standards, Genomics standards
- Abstract
The Third Party Annotation (TPA) project collects and presents high-quality annotation of nucleotide sequence. Annotation is submitted by researchers who have not themselves generated novel nucleotide sequence. In its first few years, the resource has proven to be popular with submitters from a range of biological research areas. Central to the project is the requirement for high-quality data, resulting from experimental and inferred analysis discussed in peer-reviewed publications. The data are divided into two tiers: those with experimental evidence and those with inferential evidence. Standards for TPA are detailed and illustrated with the aid of case studies.
- Published
- 2006
- Full Text
- View/download PDF
613. Meeting report: eGenomics: Cataloguing our Complete Genome Collection II.
- Author
- Field D, Morrison N, Selengut J, and Sterk P
- Subjects
- Animals, Humans, Databases as Topic standards, Genome, Genome, Human, Genomics standards
- Abstract
This article summarizes the proceedings of the "eGenomics: Cataloguing our Complete Genome Collection II" workshop held November 10-11, 2005, at the European Bioinformatics Institute. This exploratory workshop, organized by members of the Genomic Standards Consortium (GSC), brought together researchers from the genomic, functional OMICS, and computational biology communities to discuss standardization activities across a range of projects. The workshop proceedings and outcomes are set to help guide the development of the GSC's Minimal Information about a Genome Sequence (MIGS) specification.
- Published
- 2006
- Full Text
- View/download PDF
614. Taking the first steps towards a standard for reporting on phylogenies: Minimum Information About a Phylogenetic Analysis (MIAPA).
- Author
- Leebens-Mack J, Vision T, Brenner E, Bowers JE, Cannon S, Clement MJ, Cunningham CW, dePamphilis C, deSalle R, Doyle JJ, Eisen JA, Gu X, Harshman J, Jansen RK, Kellogg EA, Koonin EV, Mishler BD, Philippe H, Pires JC, Qiu YL, Rhee SY, Sjölander K, Soltis DE, Soltis PS, Stevenson DW, Wall K, Warnow T, and Zmasek C
- Subjects
- Genomics standards, Phylogeny, Reference Standards
- Abstract
In the eight years since phylogenomics was introduced as the intersection of genomics and phylogenetics, the field has provided fundamental insights into gene function, genome history and organismal relationships. The utility of phylogenomics is growing with the increase in the number and diversity of taxa for which whole genome and large transcriptome sequence sets are being generated. We assert that the synergy between genomic and phylogenetic perspectives in comparative biology would be enhanced by the development and refinement of minimal reporting standards for phylogenetic analyses. Encouraged by the development of the Minimum Information About a Microarray Experiment (MIAME) standard, we propose a similar roadmap for the development of a Minimal Information About a Phylogenetic Analysis (MIAPA) standard. Key in the successful development and implementation of such a standard will be broad participation by developers of phylogenetic analysis software, phylogenetic database developers, practitioners of phylogenomics, and journal editors.
- Published
- 2006
- Full Text
- View/download PDF
615. Annotation of environmental OMICS data: application to the transcriptomics domain.
- Author
- Morrison N, Wood AJ, Hancock D, Shah S, Hakes L, Gray T, Tiwari B, Kille P, Cossins A, Hegarty M, Allen MJ, Wilson WH, Olive P, Last K, Kramer C, Bailhache T, Reeves J, Pallett D, Warne J, Nashar K, Parkinson H, Sansone SA, Rocca-Serra P, Stevens R, Snape J, Brass A, and Field D
- Subjects
- Meta-Analysis as Topic, Ecology standards, Environment, Gene Expression Profiling, Genomics standards, Oligonucleotide Array Sequence Analysis
- Abstract
Researchers working on environmentally relevant organisms, populations, and communities are increasingly turning to the application of OMICS technologies to answer fundamental questions about the natural world, how it changes over time, and how it is influenced by anthropogenic factors. In doing so, the need to capture meta-data that accurately describes the biological "source" material used in such experiments is growing in importance. Here, we provide an overview of the formation of the "Env" community of environmental OMICS researchers and its efforts at considering the meta-data capture needs of those working in environmental OMICS. Specifically, we discuss the development to date of the Env specification, an informal specification including descriptors related to geographic location, environment, organism relationship, and phenotype. We then describe its application to the description of environmental transcriptomic experiments and how we have used it to extend the Minimum Information About a Microarray Experiment (MIAME) data standard to create a domain-specific extension that we have termed MIAME/Env. Finally, we make an open call to the community for participation in the Env Community and its future activities.
- Published
- 2006
- Full Text
- View/download PDF
616. FuGE: Functional Genomics Experiment Object Model.
- Author
- Jones AR, Pizarro A, Spellman P, and Miller M
- Subjects
- Computer Simulation, Genomics standards, Oligonucleotide Array Sequence Analysis standards, Proteomics standards
- Abstract
This is an interim report on the Functional Genomics Experiment (FuGE) Object Model. FuGE is a framework for creating data standards for high-throughput biological experiments, developed by a consortium of researchers from academia and industry. FuGE supports rich annotation of samples, protocols, instruments, and software, as well as providing extension points for technology specific details. It has been adopted by microarray and proteomics standards bodies as a basis for forthcoming standards. It is hoped that standards developers for other omics techniques will join this collaborative effort; widespread adoption will allow uniform annotation of common parts of functional genomics workflows, reduce standard development and learning times through the sharing of consistent practice, and ease the construction of software for accessing and integrating functional genomics data.
- Published
- 2006
- Full Text
- View/download PDF
617. Under the MIAME sun.
- Subjects
- Internationality, Databases, Genetic standards, Genomics standards, Oligonucleotide Array Sequence Analysis standards, Protein Interaction Mapping standards, Terminology as Topic, Vocabulary, Controlled
- Published
- 2006
- Full Text
- View/download PDF
618. Concept of sample in OMICS technology.
- Author
- Morrison N, Cochrane G, Faruque N, Tatusova T, Tateno Y, Hancock D, and Field D
- Subjects
- Animals, Humans, Oligonucleotide Array Sequence Analysis standards, Databases, Nucleic Acid standards, Genome, Genome, Human, Genomics standards, Proteome genetics, Proteomics standards
- Abstract
Fundamental biological processes can now be studied by applying the full range of OMICS technologies (genomics, transcriptomics, proteomics, metabolomics, and beyond) to the same biological sample. Clearly, it would be desirable if the concept of sample were shared among these technologies, especially as up until the time a biological sample is prepared for use in a specific OMICS assay, its description is inherently technology independent. Sharing a common informatic representation would encourage data sharing (rather than data replication), thereby reducing redundant data capture and the potential for error. This would result in a significant degree of harmonization across different OMICS data standardization activities, a task that is critical if we are to integrate data from these different data sources. Here, we review the current concept of sample in OMICS technologies as it is being dealt with by different OMICS standardization initiatives and discuss the special role that the newly formed Genomic Standards Consortium (GSC) might have to play in this domain.
- Published
- 2006
- Full Text
- View/download PDF
619. Methodological challenges of genomic research--the CARGO study.
- Author
- Deng MC, Eisen HJ, and Mehra MR
- Subjects
- Genomics standards, Humans, Probability, Reproducibility of Results, Genomics methods
- Published
- 2006
- Full Text
- View/download PDF
620. Annotating the genome of Medicago truncatula.
- Author
- Town CD
- Subjects
- Automation, Forecasting, Genomics standards, Software, Genome, Plant, Genomics methods, Medicago truncatula genetics
- Abstract
Medicago truncatula will be among the first plant species to benefit from the completion of a whole-genome sequencing project. For each of these species, Arabidopsis, rice and now poplar and Medicago, annotation, the process of identifying gene structures and defining their functions, is essential for the research community to benefit from the sequence data generated. Annotation of the Arabidopsis genome involved gene-by-gene curation of the entire genome, but the larger genomes of rice, Medicago and other species necessitate the automation of the annotation process. Profiting from the experience gained from previous whole-genome efforts, a uniform set of Medicago gene annotations has been generated by coordinated international effort and, along with other views of the genome data, has been provided to the research community at several websites.
- Published
- 2006
- Full Text
- View/download PDF
621. A plaidoyer for 'systems immunology'.
- Author
- Benoist C, Germain RN, and Mathis D
- Subjects
- Humans, Allergy and Immunology trends, Genomics standards, Genomics statistics & numerical data, Proteomics standards, Proteomics statistics & numerical data
- Abstract
A complete understanding of the immune system will ultimately require an integrated perspective on how genetic and epigenetic entities work together to produce the range of physiologic and pathologic behaviors characteristic of immune function. The immune network encompasses all of the connections and regulatory associations between individual cells and the sum of interactions between gene products within a cell. With 30,000+ protein-coding genes in a mammalian genome, further compounded by microRNAs and yet unrecognized layers of genetic controls, connecting the dots of this network is a monumental task. Over the past few years, high-throughput techniques have allowed a genome-scale view on cell states and cell- or system-level responses to perturbations. Here, we observe that after an early burst of enthusiasm, there has developed a distinct resistance to placing a high value on global genomic or proteomic analyses. Such reluctance has affected both the practice and the publication of immunological science, resulting in a substantial impediment to the advances in our understanding that such large-scale studies could potentially provide. We propose that distinct standards are needed for validation, evaluation, and visualization of global analyses, such that in-depth descriptions of cellular responses may complement the gene/factor-centric approaches currently in favor.
- Published
- 2006
- Full Text
- View/download PDF
622. Clinical genomics data standards for pharmacogenetics and pharmacogenomics.
- Author
- Shabo A
- Subjects
- Clinical Medicine statistics & numerical data, Humans, Reference Standards, Genomics standards, Genomics statistics & numerical data, Pharmacogenetics standards, Pharmacogenetics statistics & numerical data
- Abstract
This special report concerns a talk on data standards given at a workshop entitled 'An International Perspective on Pharmacogenetics: The Intersections between Innovation, Regulation and Health Delivery', which was held by the Organization for Economic Co-operation and Development (OECD) on October 17-19, 2005, in Rome, Italy. The worlds of healthcare and life sciences (HCLS) are extremely fragmented in terms of their underlying information technology, making it difficult to semantically exchange information between disparate entities. While we have reached the point where functional interoperability is ubiquitous, we are still far from achieving true semantic interoperability where a receiving system can use incoming data as though it was created internally. The critical enablers of semantic interoperability are information standards dedicated to HCLS data, spanning all the way from biological research data to clinical research and clinical trials, and finally to healthcare clinical data. The challenge lies in integrating various data standards based on predetermined goals, thereby improving the quality of care provided to patients.
- Published
- 2006
- Full Text
- View/download PDF
623. Designing, testing, and validating a microarray for stem cell characterization.
- Author
- Luo Y, Bhattacharya B, Yang AX, Puri RK, and Rao MS
- Subjects
- Animals, Cell Differentiation, Genome, Human, Genomics methods, Genomics standards, Humans, Hybridization, Genetic, Mice, Neurons cytology, Pluripotent Stem Cells cytology, Quality Control, Gene Expression Profiling methods, Gene Expression Profiling standards, Oligonucleotide Array Sequence Analysis methods, Oligonucleotide Array Sequence Analysis standards, Pluripotent Stem Cells physiology
- Abstract
Microarray technology is a powerful tool that allows for simultaneous assessment of the expression of thousands of genes and identification of gene expression patterns associated with specific cell types. Here we describe a protocol using this method to examine stem cells.
- Published
- 2006
- Full Text
- View/download PDF
624. Pairagon+N-SCAN_EST: a model-based gene annotation pipeline.
- Author
- Arumugam M, Wei C, Brown RH, and Brent MR
- Subjects
- Base Sequence, Computational Biology standards, DNA, Complementary analysis, Genes, Genome, Human, Genomics standards, Humans, Models, Statistical, Open Reading Frames, Phylogeny, RNA, Messenger analysis, Computational Biology methods, Expressed Sequence Tags, Genomics methods, Sequence Alignment, Software
- Abstract
Background: This paper describes Pairagon+N-SCAN_EST, a gene annotation pipeline that uses only native alignments. For each expressed sequence it chooses the best genomic alignment. Systems like ENSEMBL and ExoGean rely on trans alignments, in which expressed sequences are aligned to the genomic loci of putative homologs. Trans alignments contain a high proportion of mismatches, gaps, and/or apparently unspliceable introns, compared to alignments of cDNA sequences to their native loci. The Pairagon+N-SCAN_EST pipeline's first stage is Pairagon, a cDNA-to-genome alignment program based on a PairHMM probability model. This model relies on prior knowledge, such as the fact that introns must begin with GT, GC, or AT and end with AG or AC. It produces very precise alignments of high quality cDNA sequences. In the genomic regions between Pairagon's cDNA alignments, the pipeline combines EST alignments with de novo gene prediction by using N-SCAN_EST. N-SCAN_EST is based on a generalized HMM probability model augmented with a phylogenetic conservation model and EST alignments. It can predict complete transcripts by extending or merging EST alignments, but it can also predict genes in regions without EST alignments. Because they are based on probability models, both Pairagon and N-SCAN_EST can be trained automatically for new genomes and data sets. Results: On the ENCODE regions of the human genome, Pairagon+N-SCAN_EST was as accurate as any other system tested in the EGASP assessment, including ENSEMBL and ExoGean. Conclusion: With sufficient mRNA/EST evidence, genome annotation without trans alignments can compete successfully with systems like ENSEMBL and ExoGean, which use trans alignments.
- Published
- 2006
- Full Text
- View/download PDF
625. Large-scale analysis of neural stem cells and progenitor cells.
- Author
- Shin S and Rao MS
- Subjects
- Animals, Gene Expression Profiling standards, Genomics standards, Humans, Gene Expression Profiling methods, Genomics methods, Neurons cytology, Stem Cells cytology, Stem Cells physiology
- Abstract
The past few years have seen remarkable progress in our understanding of stem cell biology. The wealth of genomic data and the multiplicity of sources have enabled researchers to begin to profile stem cells in detail. In this paper we describe the biological and technical controls necessary to obtain reliable data and the relative merits of various large-scale analytical techniques including microarray, expressed sequence tag enumeration, serial analysis of gene expression and massively parallel signature sequencing. We suggest that while much has been learned, additional information remains to be gleaned by meta-analysis of existing data.
- Published
- 2006
- Full Text
- View/download PDF
626. Exogean: a framework for annotating protein-coding genes in eukaryotic genomic DNA.
- Author
- Djebali S, Delaplace F, and Roest Crollius H
- Subjects
- Amino Acid Sequence, Base Sequence, Computational Biology methods, Computational Biology standards, DNA analysis, Expressed Sequence Tags, Genes, Genomics standards, Humans, RNA Splice Sites, RNA, Messenger analysis, Sequence Alignment, Sequence Analysis, DNA, Sequence Analysis, Protein, Genome, Human, Genomics methods, Proteins genetics, Software
- Abstract
Background: Accurate and automatic gene identification in eukaryotic genomic DNA is more than ever of crucial importance to efficiently exploit the large volume of assembled genome sequences available to the community. Automatic methods have always been considered less reliable than human expertise. This is illustrated in the EGASP project, where reference annotations against which all automatic methods are measured are generated by human annotators and experimentally verified. We hypothesized that replicating the accuracy of human annotators in an automatic method could be achieved by formalizing the rules and decisions that they use, in a mathematical formalism. Results: We have developed Exogean, a flexible framework based on directed acyclic colored multigraphs (DACMs) that can represent biological objects (for example, mRNA, ESTs, protein alignments, exons) and relationships between them. Graphs are analyzed to process the information according to rules that replicate those used by human annotators. Simple individual starting objects given as input to Exogean are thus combined and synthesized into complex objects such as protein coding transcripts. Conclusion: We show here, in the context of the EGASP project, that Exogean is currently the method that best reproduces protein coding gene annotations from human experts, in terms of identifying at least one exact coding sequence per gene. We discuss current limitations of the method and several avenues for improvement.
- Published
- 2006
- Full Text
- View/download PDF
627. GPAC: benchmarking the sensitivity of genome informatics analysis to genome annotation completeness.
- Author
-
Arakawa K, Nakayama Y, and Tomita M
- Subjects
- Benchmarking, Databases, Genetic, Reproducibility of Results, Computational Biology standards, Computational Biology statistics & numerical data, Genome, Genomics standards, Genomics statistics & numerical data
- Abstract
In view of the recent explosion in genome sequence data, and the 200 or more complete genome sequences currently available, the importance of genome-scale bioinformatics analysis is increasing rapidly. However, computational genome informatics analyses often lack a statistical assessment of their sensitivity to the completeness of the functional annotation. Therefore, a pre-analysis method to automatically validate the sensitivity of computational genome analyses with regard to genome annotation completeness is useful for this purpose. In this report we developed the Gene Prediction Accuracy Classification (GPAC) test, which provides statistical evidence of sensitivity by repeating the same analysis for five different gene groups (classified according to annotation accuracy level), and for randomly sampled gene groups, with the same number of genes as each of the five classified groups. Variability in these results is then assessed, and if the results vary significantly with different data subsets, the analysis is considered "sensitive" to annotation completeness, and careful selection of data is advised prior to the actual in silico analysis. The GPAC test has been applied to the analyses of Sakai et al., 2001, and Ohno et al., 2001, and it revealed that the analysis of Ohno et al. was more sensitive to annotation completeness. It showed that GPAC could be employed to ascertain the sensitivity of an analysis. The GPAC benchmarking software is freely available in the latest G-language Genome Analysis Environment package, at http://www.g-language.org/.
- Published
- 2006
628. EGASP: the human ENCODE Genome Annotation Assessment Project.
- Author
-
Guigó R, Flicek P, Abril JF, Reymond A, Lagarde J, Denoeud F, Antonarakis S, Ashburner M, Bajic VB, Birney E, Castelo R, Eyras E, Ucla C, Gingeras TR, Harrow J, Hubbard T, Lewis SE, and Reese MG
- Subjects
- Alternative Splicing, Animals, Computational Biology methods, Databases, Genetic, Genes, Genomics methods, Humans, Mice, RNA, Messenger analysis, Sequence Analysis, DNA, Sequence Analysis, RNA, Computational Biology standards, Genome, Human, Genomics standards
- Abstract
Background: We present the results of EGASP, a community experiment to assess the state-of-the-art in genome annotation within the ENCODE regions, which span 1% of the human genome sequence. The experiment had two major goals: the assessment of the accuracy of computational methods to predict protein coding genes; and the overall assessment of the completeness of the current human genome annotations as represented in the ENCODE regions. For the computational prediction assessment, eighteen groups contributed gene predictions. We evaluated these submissions against each other based on a 'reference set' of annotations generated as part of the GENCODE project. These annotations were not available to the prediction groups prior to the submission deadline, so that their predictions were blind and an external advisory committee could perform a fair assessment., Results: The best methods had at least one gene transcript correctly predicted for close to 70% of the annotated genes. Nevertheless, the multiple transcript accuracy, taking into account alternative splicing, reached only approximately 40% to 50% accuracy. At the coding nucleotide level, the best programs reached an accuracy of 90% in both sensitivity and specificity. Programs relying on mRNA and protein sequences were the most accurate in reproducing the manually curated annotations. Experimental validation shows that only a very small percentage (3.2%) of the selected 221 computationally predicted exons outside of the existing annotation could be verified., Conclusion: This is the first such experiment in human DNA, and we have followed the standards established in a similar experiment, GASP1, in Drosophila melanogaster. We believe the results presented here contribute to the value of ongoing large-scale annotation projects and should guide further experimental methods when being scaled up to the entire human genome sequence.
- Published
- 2006
- Full Text
- View/download PDF
629. GENCODE: producing a reference annotation for ENCODE.
- Author
-
Harrow J, Denoeud F, Frankish A, Reymond A, Chen CK, Chrast J, Lagarde J, Gilbert JG, Storey R, Swarbreck D, Rossier C, Ucla C, Hubbard T, Antonarakis SE, and Guigó R
- Subjects
- Chromosome Mapping, Computational Biology methods, Expressed Sequence Tags, Genes, Genomics methods, Humans, Pseudogenes, RNA, Messenger analysis, Reference Standards, Sequence Analysis, DNA, Sequence Analysis, RNA, Computational Biology standards, Genome, Human, Genomics standards, Proteins genetics
- Abstract
Background: The GENCODE consortium was formed to identify and map all protein-coding genes within the ENCODE regions. This was achieved by a combination of initial manual annotation by the HAVANA team, experimental validation by the GENCODE consortium and a refinement of the annotation based on these experimental results., Results: The GENCODE gene features are divided into eight different categories of which only the first two (known and novel coding sequence) are confidently predicted to be protein-coding genes. 5' rapid amplification of cDNA ends (RACE) and RT-PCR were used to experimentally verify the initial annotation. Of the 420 coding loci tested, 229 RACE products have been sequenced. They supported 5' extensions of 30 loci and new splice variants in 50 loci. In addition, 46 loci without evidence for a coding sequence were validated, consisting of 31 novel and 15 putative transcripts. We assessed the comprehensiveness of the GENCODE annotation by attempting to validate all the predicted exon boundaries outside the GENCODE annotation. Out of 1,215 tested in a subset of the ENCODE regions, 14 novel exon pairs were validated, only two of them in intergenic regions., Conclusion: In total, 487 loci, of which 434 are coding, have been annotated as part of the GENCODE reference set available from the UCSC browser. Comparison of GENCODE annotation with RefSeq and ENSEMBL show only 40% of GENCODE exons are contained within the two sets, which is a reflection of the high number of alternative splice forms with unique exons annotated. Over 50% of coding loci have been experimentally verified by 5' RACE for EGASP and the GENCODE collaboration is continuing to refine its annotation of 1% human genome with the aid of experimental validation.
- Published
- 2006
- Full Text
- View/download PDF
630. Performance assessment of promoter predictions on ENCODE regions in the EGASP experiment.
- Author
-
Bajic VB, Brent MR, Brown RH, Frankish A, Harrow J, Ohler U, Solovyev VV, and Tan SL
- Subjects
- Computational Biology standards, Databases, Genetic, Genes, Genomics standards, Humans, RNA, Messenger analysis, Sequence Analysis, DNA, Sequence Analysis, RNA, Computational Biology methods, Genome, Human, Genomics methods, Promoter Regions, Genetic
- Abstract
Background: This study analyzes the predictions of a number of promoter predictors on the ENCODE regions of the human genome as part of the ENCODE Genome Annotation Assessment Project (EGASP). The systems analyzed operate on various principles and we assessed the effectiveness of different conceptual strategies used to correlate produced promoter predictions with the manually annotated 5' gene ends., Results: The predictions were assessed relative to the manual HAVANA annotation of the 5' gene ends. These 5' gene ends were used as the estimated reference transcription start sites. With the maximum allowed distance for predictions of 1,000 nucleotides from the reference transcription start sites, the sensitivity of predictors was in the range 32% to 56%, while the positive predictive value was in the range 79% to 93%. The average distance mismatch of predictions from the reference transcription start sites was in the range 259 to 305 nucleotides. At the same time, using transcription start site estimates from DBTSS and H-Invitational databases as promoter predictions, we obtained a sensitivity of 58%, a positive predictive value of 92%, and an average distance from the annotated transcription start sites of 117 nucleotides. In this experiment, the best performing promoter predictors were those that combined promoter prediction with gene prediction. The main reason for this is the reduced promoter search space that resulted in smaller numbers of false positive predictions., Conclusion: The main finding, now supported by comprehensive data, is that the accuracy of human promoter predictors for high-throughput annotation purposes can be significantly improved if promoter prediction is combined with gene prediction. Based on the lessons learned in this experiment, we propose a framework for the preparation of the next similar promoter prediction assessment.
- Published
- 2006
- Full Text
- View/download PDF
631. JIGSAW, GeneZilla, and GlimmerHMM: puzzling out the features of human genes in the ENCODE regions.
- Author
-
Allen JE, Majoros WH, Pertea M, and Salzberg SL
- Subjects
- Computational Biology standards, Expressed Sequence Tags, Genome, Human, Genomics standards, Humans, Sequence Analysis, DNA, Computational Biology methods, Genes, Genomics methods, Software
- Abstract
Background: Predicting complete protein-coding genes in human DNA remains a significant challenge. Though a number of promising approaches have been investigated, an ideal suite of tools has yet to emerge that can provide near perfect levels of sensitivity and specificity at the level of whole genes. As an incremental step in this direction, it is hoped that controlled gene finding experiments in the ENCODE regions will provide a more accurate view of the relative benefits of different strategies for modeling and predicting gene structures., Results: Here we describe our general-purpose eukaryotic gene finding pipeline and its major components, as well as the methodological adaptations that we found necessary in accommodating human DNA in our pipeline, noting that a similar level of effort may be necessary by ourselves and others with similar pipelines whenever a new class of genomes is presented to the community for analysis. We also describe a number of controlled experiments involving the differential inclusion of various types of evidence and feature states into our models and the resulting impact these variations have had on predictive accuracy., Conclusion: While in the case of the non-comparative gene finders we found that adding model states to represent specific biological features did little to enhance predictive accuracy, for our evidence-based 'combiner' program the incorporation of additional evidence tracks tended to produce significant gains in accuracy for most evidence types, suggesting that improved modeling efforts at the hidden Markov model level are of relatively little value. We relate these findings to our current plans for future research.
- Published
- 2006
- Full Text
- View/download PDF
632. Reproducibility of microarray studies: concordance of current analysis methods.
- Author
-
Wayland MT and Bahn S
- Subjects
- Animals, Gene Expression, Humans, Oligonucleotide Array Sequence Analysis, Quality Control, Reproducibility of Results, Brain physiology, Gene Expression Profiling methods, Gene Expression Profiling standards, Genomics methods, Genomics standards
- Published
- 2006
- Full Text
- View/download PDF
633. EGASP: Introduction.
- Author
-
Reese MG and Guigó R
- Subjects
- Chromosome Mapping, Computational Biology standards, Genomics standards, Humans, Sequence Analysis, RNA, Computational Biology methods, Genome, Human, Genomics methods
- Published
- 2006
- Full Text
- View/download PDF
634. Calibration of multivariate scatter plots for exploratory analysis of relations within and between sets of variables in genomic research.
- Author
-
Graffelman J and van Eeuwijk F
- Subjects
- Algorithms, Genetic Markers, Genomics standards, Genotype, Solanum lycopersicum chemistry, Solanum lycopersicum genetics, Solanum lycopersicum metabolism, Phenotype, Principal Component Analysis, Regression Analysis, Software, Genomics statistics & numerical data, Multivariate Analysis
- Abstract
The scatter plot is a well known and easily applicable graphical tool to explore relationships between two quantitative variables. For the exploration of relations between multiple variables, generalisations of the scatter plot are useful. We present an overview of multivariate scatter plots focussing on the following situations. Firstly, we look at a scatter plot for portraying relations between quantitative variables within one data matrix. Secondly, we discuss a similar plot for the case of qualitative variables. Thirdly, we describe scatter plots for the relationships between two sets of variables where we focus on correlations. Finally, we treat plots of the relationships between multiple response and predictor variables, focussing on the matrix of regression coefficients. We will present both known and new results, where an important original contribution concerns a procedure for the inclusion of scales for the variables in multivariate scatter plots. We provide software for drawing such scales. We illustrate the construction and interpretation of the plots by means of examples on data collected in a genomic research program on taste in tomato.
- Published
- 2005
- Full Text
- View/download PDF
635. Proposed methods for testing and selecting the ERCC external RNA controls.
- Subjects
- Bacteria metabolism, Computational Biology methods, DNA metabolism, DNA Primers chemistry, Dose-Response Relationship, Drug, Humans, Models, Statistical, Nucleic Acid Hybridization, Oligonucleotide Array Sequence Analysis methods, Plasmids metabolism, RNA, Messenger metabolism, Reverse Transcriptase Polymerase Chain Reaction, Gene Expression Profiling, Genetic Techniques, Genomics methods, Genomics standards, RNA chemistry, RNA genetics, Reference Standards
- Abstract
The External RNA Control Consortium (ERCC) is an ad-hoc group with approximately 70 members from private, public, and academic organizations. The group is developing a set of external RNA control transcripts that can be used to assess technical performance in gene expression assays. The ERCC is now initiating the Testing Phase of the project, during which candidate external RNA controls will be evaluated in both microarray and QRT-PCR gene expression platforms. This document describes the proposed experiments and informatics process that will be followed to test and qualify individual controls. The ERCC is distributing this description of the proposed testing process in an effort to gain consensus and to encourage feedback from the scientific community. On October 4-5, 2005, the ERCC met to further review the document, clarify ambiguities, and plan next steps. A summary of this meeting and changes to the test plan are provided as an appendix to this manuscript.
- Published
- 2005
- Full Text
- View/download PDF
636. Both hypomethylation and hypermethylation in a 0.2-kb region of a DNA repeat in cancer.
- Author
-
Nishiyama R, Qi L, Lacey M, and Ehrlich M
- Subjects
- 5-Methylcytosine metabolism, Biomarkers, Tumor genetics, Blotting, Southern, Cloning, Molecular, CpG Islands genetics, Female, Gene Expression Regulation, Neoplastic, Genomics standards, Humans, Polymerase Chain Reaction methods, Reproducibility of Results, Transcription, Genetic, DNA Methylation, Genomics methods, Ovarian Neoplasms genetics, Wilms Tumor genetics
- Abstract
NBL2 is a tandem 1.4-kb DNA repeat, whose hypomethylation in hepatocellular carcinomas was shown previously to be an independent predictor of disease progression. Here, we examined methylation of all cytosine residues in a 0.2-kb subregion of NBL2 in ovarian carcinomas, Wilms' tumors, and diverse control tissues by hairpin-bisulfite PCR. This new genomic sequencing method detects 5-methylcytosine on covalently linked complementary strands of a DNA fragment. All DNA clones from normal somatic tissues displayed symmetrical methylation at seven CpG positions and no methylation or only hemimethylation at two others. Unexpectedly, 56% of cancer DNA clones had decreased methylation at some normally methylated CpG sites as well as increased methylation at one or both of the normally unmethylated sites. All 146 DNA clones from 10 cancers could be distinguished from all 91 somatic control clones by assessing methylation changes at three of these CpG sites. The special involvement of DNA methyltransferase 3B in NBL2 methylation was indicated by analysis of cells from immunodeficiency, centromeric region instability, and facial anomalies syndrome patients who have mutations in the gene encoding DNA methyltransferase 3B. Blot hybridization of 33 cancer DNAs digested with CpG methylation-sensitive enzymes confirmed that NBL2 arrays are unusually susceptible to cancer-linked hypermethylation and hypomethylation, consistent with our novel genomic sequencing findings. The combined Southern blot and genomic sequencing data indicate that some of the cancer-linked alterations in CpG methylation are occurring with considerable sequence specificity. NBL2 is an attractive candidate for an epigenetic cancer marker and for elucidating the nature of epigenetic changes in cancer.
- Published
- 2005
- Full Text
- View/download PDF
637. Genomics and medical devices: a new paradigm for health care.
- Author
-
Rados C
- Subjects
- Gene Expression Profiling instrumentation, Gene Expression Profiling standards, Genetics, Medical instrumentation, Genetics, Medical trends, Genomics standards, Genomics trends, Oligonucleotide Array Sequence Analysis methods, Oligonucleotide Array Sequence Analysis standards, United States, United States Food and Drug Administration, Equipment and Supplies standards, Genomics instrumentation
- Published
- 2005
638. NCRI informatics initiative.
- Author
-
Begent RH, Kerr P, Parkinson H, Reddington F, and Wilkinson JM
- Subjects
- Animals, Humans, Proteomics standards, United Kingdom, Computational Biology organization & administration, Computational Biology standards, Genomics standards, Government Agencies organization & administration, Government Programs organization & administration, Neoplasms, Research standards
- Published
- 2005
- Full Text
- View/download PDF
639. An analysis of extensible modelling for functional genomics data.
- Author
-
Jones AR and Paton NW
- Subjects
- Chemistry Techniques, Analytical, Mass Spectrometry, Models, Molecular, Software, Vocabulary, Controlled, Computer Simulation, Data Collection standards, Genomics methods, Genomics standards, Guidelines as Topic, Microarray Analysis standards, Models, Biological
- Abstract
Background: Several data formats have been developed for large scale biological experiments, using a variety of methodologies. Most data formats contain a mechanism for allowing extensions to encode unanticipated data types. Extensions to data formats are important because the experimental methodologies tend to be fairly diverse and rapidly evolving, which hinders the creation of formats that will be stable over time., Results: In this paper we review the data formats that exist in functional genomics, some of which have become de facto or de jure standards, with a particular focus on how each domain has been modelled, and how each format allows extensions. We describe the tasks that are frequently performed over data formats and analyse how well each task is supported by a particular modelling structure., Conclusion: From our analysis, we make recommendations as to the types of modelling structure that are most suitable for particular types of experimental annotation. There are several standards currently under development that we believe could benefit from systematically following a set of guidelines.
- Published
- 2005
- Full Text
- View/download PDF
640. Toxicogenomics in genetic toxicology and hazard determination--concluding remarks.
- Author
-
Sarrif A, van Delft JH, Gant TW, Kleinjans JC, and van Vliet E
- Subjects
- Animals, Gene Expression Profiling standards, Humans, Oligonucleotide Array Sequence Analysis standards, Reproducibility of Results, Research Design, Toxicogenetics standards, Genomics standards, Toxicogenetics trends
- Published
- 2005
- Full Text
- View/download PDF
641. Summary recommendations for standardization and reporting of metabolic analyses.
- Author
-
Lindon JC, Nicholson JK, Holmes E, Keun HC, Craig A, Pearce JT, Bruce SJ, Hardy N, Sansone SA, Antti H, Jonsson P, Daykin C, Navarange M, Beger RD, Verheij ER, Amberg A, Baunsgaard D, Cantor GH, Lehman-McKeeman L, Earll M, Wold S, Johansson E, Haselden JN, Kramer K, Thomas C, Lindberg J, Schuppe-Koistinen I, Wilson ID, Reily MD, Robertson DG, Senn H, Krotzky A, Kochhar S, Powell J, van der Ouderaa F, Plumb R, Schaefer H, and Spraul M
- Subjects
- Animals, Genomics standards, Humans, Models, Biological, Multivariate Analysis, Oligonucleotide Array Sequence Analysis standards, Proteomics standards, Research Design, Specimen Handling, Metabolism genetics, Metabolism physiology, Research standards
- Published
- 2005
- Full Text
- View/download PDF
642. Introducing the German Mouse Clinic: open access platform for standardized phenotyping.
- Author
-
Gailus-Durner V, Fuchs H, Becker L, Bolle I, Brielmeier M, Calzada-Wack J, Elvert R, Ehrhardt N, Dalke C, Franz TJ, Grundner-Culemann E, Hammelbacher S, Hölter SM, Hölzlwimmer G, Horsch M, Javaheri A, Kalaydjiev SV, Klempt M, Kling E, Kunder S, Lengger C, Lisse T, Mijalski T, Naton B, Pedersen V, Prehn C, Przemeck G, Racz I, Reinhard C, Reitmeir P, Schneider I, Schrewe A, Steinkamp R, Zybill C, Adamski J, Beckers J, Behrendt H, Favor J, Graw J, Heldmaier G, Höfler H, Ivandic B, Katus H, Kirchhof P, Klingenspor M, Klopstock T, Lengeling A, Müller W, Ohl F, Ollert M, Quintanilla-Martinez L, Schmidt J, Schulz H, Wolf E, Wurst W, Zimmer A, Busch DH, and de Angelis MH
- Subjects
- Animals, Genomics methods, Germany, International Cooperation, Databases, Genetic, Genomics organization & administration, Genomics standards, Mice genetics, Mice, Transgenic classification, Mice, Transgenic genetics, Phenotype
- Published
- 2005
- Full Text
- View/download PDF
643. Genomics, genetic epidemiology, and genomic medicine.
- Author
-
Lazaridis KN and Petersen GM
- Subjects
- Chromosome Mapping, Female, Forecasting, Genetic Testing, Genetics, Medical trends, Genome, Human, Genomics trends, Human Genome Project, Humans, Male, Microsatellite Repeats, Sensitivity and Specificity, Genetic Predisposition to Disease epidemiology, Genetics, Medical standards, Genomics standards
- Abstract
Medical science is on the threshold of unparalleled progress as a result of the advent of genomics and related disciplines. Human genomics, the study of structure, function, and interactions of all genes in the human genome, promises to improve the diagnosis, treatment, and prevention of disease. This opportunity is the result of the recent completion of the Human Genome Project. It is anticipated that genomics will bring to physicians a powerful means to discover hereditary elements that interact with environmental factors leading to disease. However, the expected transformation toward genomics-based medicine will occur over decades. It will require efforts of many scientists and physicians to begin now to sort out the vast amounts of information in the human genome and translate it to meaningful applications in clinical practice. Meanwhile, practicing physicians and health professionals need to be trained in the principles, applications, and limitations of genomics and genomic medicine. Only then will we be in a position to benefit patients, which is the ultimate goal of accelerating scientific progress in medicine. In this inaugural article, we introduce and discuss concepts, facts, and methods of genomics and genetic epidemiology that will be drawn on in the forthcoming topics of the clinical genomics series.
- Published
- 2005
- Full Text
- View/download PDF
644. MIAME guidelines.
- Author
-
Knudsen TB and Daston GP
- Subjects
- Animals, Gene Expression Profiling methods, Humans, Kinetics, RNA analysis, Computational Biology, Gene Expression Regulation, Developmental, Genomics standards, Guidelines as Topic standards, Oligonucleotide Array Sequence Analysis standards
- Published
- 2005
- Full Text
- View/download PDF
645. Construction of representative transcript and protein sets of human, mouse, and rat as a platform for their transcriptome and proteome analysis.
- Author
-
Kasukawa T, Katayama S, Kawaji H, Suzuki H, Hume DA, and Hayashizaki Y
- Subjects
- Animals, Cluster Analysis, Computational Biology, Gene Expression Profiling, Humans, Mice, Rats, Reference Standards, DNA, Complementary genetics, Genomics standards, Proteome standards
- Abstract
The number of mammalian transcripts identified by full-length cDNA projects and genome sequencing projects is increasing remarkably. Clustering them into a strictly nonredundant and comprehensive set provides a platform for functional analysis of the transcriptome and proteome, but the quality of the clustering and predictive usefulness have previously required manual curation to identify truncated transcripts and inappropriate clustering of closely related sequences. A Representative Transcript and Protein Sets (RTPS) pipeline was previously designed to identify the nonredundant and comprehensive set of mouse transcripts based on clustering of a large mouse full-length cDNA set (FANTOM2). Here we propose an alternative method that is more robust, requires less manual curation, and is applicable to other organisms in addition to mouse. RTPSs of human, mouse, and rat have been produced by this method and used for validation. Their comprehensiveness and quality are discussed by comparison with other clustering approaches. The RTPSs are available at .
- Published
- 2004
- Full Text
- View/download PDF
646. Summit calls for clear view of deposits in all biobanks.
- Author
-
Pearson H
- Subjects
- Disease Susceptibility, Genomics standards, Humans, Pharmacogenetics, Genomics methods, Guidelines as Topic, International Cooperation, Tissue Banks organization & administration
- Published
- 2004
- Full Text
- View/download PDF
647. Genomics and public health practice: a survey of nurses in local health departments in North Carolina.
- Author
-
Irwin DE, Millikan RC, Stevens R, Roche MI, Rakhra-Burris T, Davis MV, Mahanna EP, Duckworth S, and Whiteside HP Jr
- Subjects
- Centers for Disease Control and Prevention, U.S. standards, Family Health, Genetic Predisposition to Disease, Humans, Needs Assessment, North Carolina, Practice Guidelines as Topic standards, United States, Genomics education, Genomics standards, Medical History Taking standards, Public Health Nursing education
- Abstract
In order to examine the extent to which current public health practices incorporate information about genetic susceptibilities potentially obtained by a comprehensive family history, public health nurses in North Carolina were surveyed to assess the extent to which this information is routinely collected. In addition, we measured nurses' awareness of the Centers for Disease Control and Prevention's Genomic Competencies and assessed training needs related to genomics. A self-administered survey was distributed to all public health nurse supervisors, directors, consultants as well as Breast and Cervical Cancer Coordination Program managers in North Carolina. A 68.4% response rate (292/427) was obtained. The majority (88.7%) of nurses with regular patient contact report routine gathering of family history data for adult chronic diseases. Some key family history data components are routinely collected including the total number of affected relatives (76%), ethnicity information (57.5%), and age of chronic disease onset (31.8%). A minority of nurses (9%) reported awareness of the Genomic Competencies, and most (72.1%) acknowledged their need for training in order to achieve these competencies. Information collected by taking a family history can indicate a combination of genetic and environmental susceptibilities for chronic diseases.
- Published
- 2004
- Full Text
- View/download PDF
648. Molecular typing of human leukocyte antigen and related polymorphisms following whole genome amplification.
- Author
-
Shao W, Tang J, Dorak MT, Song W, Lobashevsky E, Cobbs CS, Wrensch MR, and Kaslow RA
- Subjects
- Africa, Artifacts, Black People genetics, Europe, Genomics standards, Humans, Reproducibility of Results, White People genetics, Genome, Human, Genomics methods, HLA Antigens genetics, Polymorphism, Genetic
- Abstract
Reliable, high-resolution genotyping of human leukocyte antigen (HLA) polymorphisms is often compromised by DNA samples of suboptimal quality or limited quantity. We tested the feasibility of molecular typing for variants at HLA and neighboring loci using whole genome amplification (WGA) strategy facilitated by the Phi29 DNA polymerase. With little (5-100 ng) starting genomic DNA of varying quality and source materials, WGA was deemed successful in 167 of 169 DNA from 47 cell lines, 100 European Americans, and 22 native Africans. The Phi29-processed DNA provided adequate templates for polymerase chain reaction (PCR)-based analyses of several HLA (A, B, C, DRB1, and DQB1) and related loci (HFE, MICA, and 10 microsatellites) in the 6p24.3-6p21.3 region, with PCR amplicons ranging from 92 to 2200 bp. Five different genotyping techniques resolved and confirmed 364 genotypes when both original and Phi29-processed DNA worked in PCRs. General population genetic analyses provided additional evidence that WGA may represent a reliable and simple approach to securing ample genomic DNA for typing HLA, MICA, and related variants.
- Published
- 2004
- Full Text
- View/download PDF
649. A plea for "omics" research in complex diseases such as multiple sclerosis--a change of mind is needed.
- Author
-
Martin R and Leppert D
- Subjects
- Biomedical Research methods, Computational Biology standards, Computational Biology trends, Databases, Nucleic Acid standards, Databases, Nucleic Acid trends, Drug Evaluation, Preclinical methods, Genomics methods, Genomics standards, Humans, Meta-Analysis as Topic, Multiple Sclerosis genetics, Multiple Sclerosis metabolism, Proteomics methods, Proteomics standards, Biomedical Research standards, Drug Evaluation, Preclinical standards, Multiple Sclerosis drug therapy
- Published
- 2004
- Full Text
- View/download PDF
650. National Science Foundation-sponsored workshop report. Draft plan for soybean genomics.
- Author
-
Stacey G, Vodkin L, Parrott WA, and Shoemaker RC
- Subjects
- Computational Biology standards, Computational Biology trends, Genomics standards, Genotype, Physical Chromosome Mapping, Genome, Plant, Genomics methods, Soybean Proteins genetics, Glycine max genetics
- Abstract
Recent efforts to coordinate and define a research strategy for soybean (Glycine max) genomics began with the establishment of a Soybean Genetics Executive Committee, which will serve as a communication focal point between the soybean research community and granting agencies. Secondly, a workshop was held to define a strategy to incorporate existing tools into a framework for advancing soybean genomics research. This workshop identified and ranked research priorities essential to making more informed decisions as to how to proceed with large scale sequencing and other genomics efforts. Most critical among these was the need to finalize a physical map and to obtain a better understanding of genome microstructure. Addressing these research needs will require pilot work on new technologies to demonstrate an ability to discriminate between recently duplicated regions in the soybean genome and pilot projects to analyze an adequate amount of random genome sequence to identify and catalog common repeats. The development of additional markers, reverse genetics tools, and bioinformatics is also necessary. Successful implementation of these goals will require close coordination among various working groups.
- Published
- 2004
- Full Text
- View/download PDF