19 results on "Shannon M. Bell"
Search Results
2. Workshop Report: Catalyzing Knowledge-Driven Discovery in Environmental Health Sciences through a Harmonized Language
- Author
-
Stephanie Holmgren, Shannon M. Bell, Jessica Wignall, Christopher G. Duncan, Richard K. Kwok, Ryan Cronk, Kimberly Osborn, Steven Black, Anne Thessen, and Charles Schmitt
- Subjects
Health, Toxicology and Mutagenesis; Public Health, Environmental and Occupational Health
- Abstract
Harmonized language is essential to finding, sharing, and reusing large-scale, complex data. Gaps and barriers prevent the adoption of harmonized language approaches in environmental health sciences (EHS). To address this, the National Institute of Environmental Health Sciences and partners created the Environmental Health Language Collaborative (EHLC). The purpose of EHLC is to facilitate a community-driven effort to advance the development and adoption of harmonized language approaches in EHS. EHLC is a forum to pinpoint language harmonization gaps, to facilitate the development of, raise awareness of, and encourage the use of harmonization approaches and tools, and to develop new standards and recommendations. To ensure that EHLC’s focus and structure would be sustainable long-term and meet the needs of the field, EHLC launched an inaugural workshop in September 2021 focused on “Developing Sustainable Language Solutions” and “Building a Sustainable Community”. When the attendees were surveyed, 91% said harmonized language solutions would be of high value/benefit, and 60% agreed to continue contributing to EHLC efforts. Based on workshop discussions, future activities will focus on targeted collaborative use-case working groups in addition to offering education and training on ontologies, metadata, and standards, and developing an EHS language resource portal.
- Published
- 2023
- Full Text
- View/download PDF
3. CATMoS: Collaborative Acute Toxicity Modeling Suite
- Author
-
Tyler Peryea, Ahsan Habib Polash, Alessandra Roncaglioni, Daniel M. Wilson, Warren Casey, Patricia Ruiz, Nathalie Alépée, Sherif Farag, Giovanna J. Lavado, Kimberley M. Zorn, Alexey V. Zakharov, Davide Ballabio, Katrina M. Waters, Risa Sayre, Giuseppe Felice Mangiatordi, Orazio Nicolotti, Nicole Kleinstreuer, Pankaj R. Daga, Sean Ekins, Kamel Mansouri, Liguo Wang, Judy Strickland, Matthew J. Hirn, Sudin Bhattacharya, Dac-Trung Nguyen, Emilio Benfenati, Ignacio J. Tripodi, Amanda K. Parks, Garett Goh, Dennis G. Thomas, Glenn J. Myatt, Prachi Pradeep, Gergely Zahoranszky-Kohalmi, Anton Simeonov, Arthur C. Silva, Grace Patlewicz, Timothy Sheils, Stephen Boyd, Agnes L. Karmaus, Ahmed Sayed, Alex M. Clark, Todd M. Martin, Pavel Karpov, Jeffery M. Gearhart, Robert Rallo, D Allen, Charles Siegel, Zhen Zhang, Zijun Xiao, Alexander Tropsha, Stephen J. Capuzzi, Alexandru Korotcov, Carolina Horta Andrade, Noel Southall, Viviana Consonni, Igor V. Tetko, Jeremy M. Fitzpatrick, Andrew J. Wedlake, Denis Fourches, Zhongyu Wang, Vinicius M. Alves, Eugene N. Muratov, Timothy E. H. Allen, Andrea Mauri, James B. Brown, Alexandre Varnek, Yun Tang, Sanjeeva J. Wijeyesakere, Daniel P. Russo, Cosimo Toma, Christopher M. Grulke, Michael S. Lawless, Domenico Gadaleta, Paritosh Pande, Thomas Hartung, Jonathan M. Goodman, Kristijan Vukovic, Joyce V. Bastos, Daniela Trisciuzzi, Fagen F. Zhang, Domenico Alberga, Thomas Luechtefeld, Dan Marsh, Tyler R. Auernhammer, Shannon M. Bell, Xinhao Li, Brian J. Teppen, F. Lunghini, Sergey Sosnin, Hao Zhu, Feng Gao, Craig Rowlands, Tongan Zhao, R Todeschini, Valery Tkachenko, Francesca Grisoni, Hongbin Yang, Yaroslav Chushak, Maxim V. Fedorov, Heather L. Ciallella, and Gilles Marcou
- Subjects
Health, Toxicology and Mutagenesis; Bioinformatics; Government Agencies; Analytical Chemistry; Toxicity Tests, Acute; Medicine; Animals; Computer Simulation; United States Environmental Protection Agency; consensus analysis; QSAR; Acute Toxicity; Public Health, Environmental and Occupational Health; United States; Rats; machine learning; Systemic toxicity; Cheminformatics; Potential toxicity
- Abstract
BACKGROUND: Humans are exposed to tens of thousands of chemical substances that need to be assessed for their potential toxicity. Acute systemic toxicity testing serves as the basis for regulatory hazard classification, labeling, and risk management. However, it is cost- and time-prohibitive to evaluate all new and existing chemicals using traditional rodent acute toxicity tests. In silico models built using existing data facilitate rapid acute toxicity predictions without using animals. OBJECTIVES: The U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) Acute Toxicity Workgroup organized an international collaboration to develop in silico models for predicting acute oral toxicity based on five different end points: Lethal Dose 50 (LD50) value, U.S. Environmental Protection Agency hazard (four) categories, Globally Harmonized System for Classification and Labeling hazard (five) categories, very toxic chemicals (LD50≤50 mg/kg), and nontoxic chemicals (LD50>2,000 mg/kg). METHODS: An acute oral toxicity data inventory for 11,992 chemicals was compiled, split into training and evaluation sets, and made available to 35 participating international research groups that submitted a total of 139 predictive models. Predictions that fell within the applicability domains of the submitted models were evaluated using external validation sets. These were then combined into consensus models to leverage strengths of individual approaches. RESULTS: The resulting consensus predictions, which leverage the collective strengths of each individual model, form the Collaborative Acute Toxicity Modeling Suite (CATMoS). CATMoS demonstrated high performance in terms of accuracy and robustness when compared with in vivo results. DISCUSSION: CATMoS is being evaluated by regulatory agencies for its utility and applicability as a potential replacement for in vivo rat acute oral toxicity studies. CATMoS predictions for more than 800,000 chemicals have been made available via the National Toxicology Program's Integrated Chemical Environment tools and data sets (ice.ntp.niehs.nih.gov). The models are also implemented in a free, standalone, open-source tool, OPERA, which allows predictions of new and untested chemicals to be made. https://doi.org/10.1289/EHP8495.
- Published
- 2021
- Full Text
- View/download PDF
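The consensus step described in the CATMoS record above can be illustrated with a minimal sketch: a model's prediction counts toward a chemical's consensus value only when the chemical falls within that model's applicability domain. The model names, toy values, and unweighted averaging below are assumptions for illustration; CATMoS itself uses a more elaborate weighting and evaluation scheme.

```python
# Minimal sketch of consensus prediction across QSAR models (illustration only;
# model names, toy values, and unweighted averaging are assumptions, not CATMoS itself).
import numpy as np
import pandas as pd

# Per-model log10(LD50) predictions; NaN marks chemicals outside a model's applicability domain.
predictions = pd.DataFrame(
    {
        "model_A": [2.1, np.nan, 3.0],
        "model_B": [2.3, 1.8, np.nan],
        "model_C": [2.0, 1.9, 2.8],
    },
    index=["chem_1", "chem_2", "chem_3"],
)

# Consensus: average only the in-domain predictions for each chemical.
consensus = predictions.mean(axis=1, skipna=True)
# Track how many models contributed, as a crude confidence indicator.
n_models = predictions.notna().sum(axis=1)

print(pd.DataFrame({"consensus_log10_LD50": consensus, "n_models": n_models}))
```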
4. Systematic Omics Analysis Review (SOAR) tool to support risk assessment.
- Author
-
Emma R McConnell, Shannon M Bell, Ila Cote, Rong-Lin Wang, Edward J Perkins, Natàlia Garcia-Reyero, Ping Gong, and Lyle D Burgoon
- Subjects
Medicine; Science
- Abstract
Environmental health risk assessors are challenged to understand and incorporate new data streams as the field of toxicology continues to adopt new molecular and systems biology technologies. Systematic screening reviews can help risk assessors and assessment teams determine which studies to consider for inclusion in a human health assessment. A tool for systematic reviews should be standardized and transparent in order to consistently determine which studies meet minimum quality criteria prior to performing in-depth analyses of the data. The Systematic Omics Analysis Review (SOAR) tool is focused on assisting risk assessment support teams in performing systematic reviews of transcriptomic studies. SOAR is a spreadsheet tool of 35 objective questions developed by domain experts, focused on transcriptomic microarray studies, and including four main topics: test system, test substance, experimental design, and microarray data. The tool will be used as a guide to identify studies that meet basic published quality criteria, such as those defined by the Minimum Information About a Microarray Experiment standard and the Toxicological Data Reliability Assessment Tool. Seven scientists were recruited to test the tool by using it to independently rate 15 published manuscripts that study chemical exposures with microarrays. Using their feedback, questions were weighted based on importance of the information and a suitability cutoff was set for each of the four topic sections. The final validation resulted in 100% agreement between the users on four separate manuscripts, showing that the SOAR tool may be used to facilitate the standardized and transparent screening of microarray literature for environmental human health risk assessment.
- Published
- 2014
- Full Text
- View/download PDF
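A minimal sketch of the scoring logic described in the SOAR record above: weighted answers to objective questions are summed per topic section and compared against a per-section suitability cutoff. The question counts, weights, and cutoffs here are hypothetical, not the published tool's values.

```python
# Sketch of a weighted screening score with per-section cutoffs, in the spirit of the
# SOAR tool described above. The questions, weights, and cutoffs here are hypothetical.
SECTIONS = {
    "test_system":         {"weights": [2, 1, 1], "cutoff": 3},
    "test_substance":      {"weights": [2, 2],    "cutoff": 3},
    "experimental_design": {"weights": [3, 1, 1], "cutoff": 4},
    "microarray_data":     {"weights": [2, 2, 1], "cutoff": 3},
}

def passes_screen(answers: dict[str, list[bool]]) -> bool:
    """answers maps a section name to yes/no answers for that section's questions."""
    for section, cfg in SECTIONS.items():
        score = sum(w for w, ok in zip(cfg["weights"], answers[section]) if ok)
        if score < cfg["cutoff"]:
            return False  # study fails this section's minimum quality bar
    return True

example = {
    "test_system": [True, True, False],
    "test_substance": [True, True],
    "experimental_design": [True, False, True],
    "microarray_data": [True, True, True],
}
print(passes_screen(example))  # True
```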
5. In vitro to in vivo extrapolation for high throughput prioritization and decision making
- Author
-
Ted W. Simon, Shannon M. Bell, Grazyna Fraczkiewicz, Judy Strickland, M. Bartels, Kim L. R. Brouwer, Annie Lumen, Alicia Paini, Alice Ke, Nisha S. Sipes, Scott G. Lynn, Paul S. Price, Stephen S. Ferguson, Catherine S. Sprankle, Xiaoqing Chang, Annie M. Jarabek, David G. Allen, Nicole Kleinstreuer, Warren Casey, Caroline Ring, John F. Wambaugh, John A. Troutman, Barbara A. Wetmore, and Neepa Choksi
- Subjects
Animal Use Alternatives; Prioritization; Computer science; Extrapolation; Expert Systems; Guidelines as Topic; Computational toxicology; Toxicology; Models, Biological; Human health; Chemical safety; In vivo; Toxicity Tests; Animals; Humans; Computer Simulation; United States Environmental Protection Agency; Throughput; Decision Making, Computer-Assisted; Decision Making, Organizational; Health Priorities; Computational Biology; General Medicine; United States; High-Throughput Screening Assays; Risk analysis; United States Dept. of Health and Human Services; National Institute of Environmental Health Sciences (U.S.)
- Abstract
In vitro chemical safety testing methods offer the potential for efficient and economical tools to provide relevant assessments of human health risk. To realize this potential, methods are needed to relate in vitro effects to in vivo responses, i.e., in vitro to in vivo extrapolation (IVIVE). Currently available IVIVE approaches need to be refined before they can be utilized for regulatory decision-making. To explore the capabilities and limitations of IVIVE within this context, the U.S. Environmental Protection Agency Office of Research and Development and the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods co-organized a workshop and webinar series. Here, we integrate content from the webinars and workshop to discuss activities and resources that would promote inclusion of IVIVE in regulatory decision-making. We discuss properties of models that successfully generate predictions of in vivo doses from effective in vitro concentrations, including the experimental systems that provide input parameters for these models, areas of success, and areas for improvement to reduce model uncertainty. Finally, we provide case studies on the uses of IVIVE in safety assessments, which highlight the respective differences, information requirements, and outcomes across various approaches when applied for decision-making.
- Published
- 2018
- Full Text
- View/download PDF
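The IVIVE concept discussed in the record above (predicting an in vivo dose from an effective in vitro concentration) is often implemented as steady-state reverse dosimetry. A minimal sketch follows, assuming linear kinetics; the AC50 and Css values are invented for illustration.

```python
# Minimal sketch of steady-state in vitro to in vivo extrapolation (reverse dosimetry).
# Assumes plasma concentration scales linearly with dose at steady state; the Css and
# AC50 values below are made-up illustrative numbers, not data from the workshop.
def administered_equivalent_dose(ac50_uM: float, css_per_unit_dose_uM: float) -> float:
    """
    Convert an in vitro activity concentration (AC50, in µM) into the external dose
    (mg/kg/day) predicted to produce that plasma concentration at steady state.

    css_per_unit_dose_uM: steady-state plasma concentration (µM) predicted for a
    1 mg/kg/day exposure, e.g. from a toxicokinetic model.
    """
    return ac50_uM / css_per_unit_dose_uM

# Example: an assay AC50 of 5 µM and a modeled Css of 1.6 µM per 1 mg/kg/day
# imply an equivalent external dose of roughly 3.1 mg/kg/day.
print(administered_equivalent_dose(5.0, 1.6))
```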
6. Application of open-source PBPK models in rat-to-human pharmacokinetic extrapolation of oral nicotine exposure
- Author
-
Jingjie Zhang, Xiaoqing Chang, K. Monica Lee, Shannon M. Bell, and David E. Hines
- Subjects
Physiologically based pharmacokinetic modelling; Chemistry; Health, Toxicology and Mutagenesis; Cmax; Absorption; Buccal administration; Pharmacology; Toxicology; Computer Science Applications; Nicotine; Animal data; Pharmacokinetics; In vivo
- Abstract
Physiologically Based Pharmacokinetic (PBPK) models are often developed using animal data and applied to predict chemical movement and concentration in humans. However, differences in physiology and exposure routes between animal experiments and human exposures may impact the predictions and interpretation of PBPK model results. Data are needed to parameterize PBPK models, potentially requiring chemical-specific adjustment to model inputs. Since data may be limited for a chemical or exposure of interest, in silico approaches such as chemical structure-based modeling can help fill the gaps. This case study assesses generalized, open-source PBPK models for interspecies kinetic extrapolation of nicotine using both in vivo data from a rat oral gavage study and in silico predictions. Nicotine is used as a data-rich example chemical because PK data are available from different exposure routes in both humans and animals. Rat nicotine plasma data were obtained after oral gavage dosing of nicotine (up to 8 mg/kg/day over 7 days) and used to develop human nicotine models for both oral ingestion and buccal (mouth tissue) absorption. As an open-source buccal tissue absorption model was not available, we mimicked a buccal exposure by modifying the open-source model for the intravenous exposure route. An 8 mg/kg/day human dose using oral ingestion and buccal absorption resulted in an approximately 2- and 4-fold higher predicted maximum plasma concentration (Cmax) than an 8 mg/kg/day gavage exposure in rats, respectively, highlighting the impact of species and exposure route on model predictions. This study demonstrates that a generalized and open-source PBPK model can extrapolate the plasma kinetic profiles of nicotine between species using in vivo and in silico data after accounting for differences in exposure routes. In silico-informed model parameterizations provided similar results to rat in vivo-based parameterizations, highlighting the potential use of in silico approaches when data are limited.
- Published
- 2021
- Full Text
- View/download PDF
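To illustrate why exposure route changes the predicted Cmax in the record above, here is a deliberately simplified one-compartment model with first-order absorption. It is not the open-source PBPK model used in the study, and all parameter values are assumptions.

```python
# Simplified one-compartment PK sketch showing how the absorption rate (a stand-in for
# exposure route) changes predicted Cmax for the same dose. Parameters are invented
# for illustration; this is not the open-source PBPK model from the study above.
import numpy as np

def plasma_conc(t_h, dose_mg, ka_per_h, ke_per_h, vd_L, f_abs=1.0):
    """Single oral dose with first-order absorption and elimination (Bateman equation)."""
    return (f_abs * dose_mg * ka_per_h / (vd_L * (ka_per_h - ke_per_h))) * (
        np.exp(-ke_per_h * t_h) - np.exp(-ka_per_h * t_h)
    )

t = np.linspace(0, 12, 500)  # hours after dosing
dose = 8.0 * 70.0            # 8 mg/kg scaled to an assumed 70 kg body weight

# Faster absorption (buccal-like) vs. slower absorption (oral-ingestion-like).
cmax_fast = plasma_conc(t, dose, ka_per_h=3.0, ke_per_h=0.35, vd_L=180.0).max()
cmax_slow = plasma_conc(t, dose, ka_per_h=0.8, ke_per_h=0.35, vd_L=180.0).max()

print(f"Cmax with fast absorption: {cmax_fast:.2f} mg/L; with slow absorption: {cmax_slow:.2f} mg/L")
```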
7. Application of new approach methodologies: ICE tools to support chemical evaluations
- Author
-
Amber B. Daniel, Xiaoqing Chang, David E. Hines, Jaleh Abedini, John P. Rooney, Catherine S. Sprankle, Kamel Mansouri, Bethany Cook, Neepa Choksi, Warren Casey, Agnes L. Karmaus, Shannon M. Bell, Eric McAfee, Jason Phillips, David Allen, and Nicole Kleinstreuer
- Subjects
Physiologically based pharmacokinetic modelling; Reference data; Toxicity data; Computer science; Health, Toxicology and Mutagenesis; In vitro toxicology; Biochemical engineering; Toxicology; Computer Science Applications
- Abstract
New approach methodologies (NAMs) for toxicological applications such as in vitro assays and in silico models generate data that can be useful for assessing potential health impacts of chemicals. The National Toxicology Program’s (NTP’s) Integrated Chemical Environment (ICE; https://ice.ntp.niehs.nih.gov/ ) provides user-friendly access to NAM data and tools to explore and contextualize chemical bioactivity and molecular properties. ICE contains curated in vivo and in vitro toxicity testing data and experimental physicochemical property data gathered from different literature sources. ICE also contains computationally generated toxicity data and physicochemical parameter predictions. ICE provides interactive computational tools that characterize, analyze, and predict bioactivity for user-defined chemicals. ICE Search allows users to select and merge data sets for lists of chemicals and mixtures, yielding summary-level information, curated reference data, and bioactivity details mapped to mechanistic targets and modes of action. With the Curve Surfer tool, the user can explore concentration–response relationships of curated high-throughput screening assays. The Physiologically Based Pharmacokinetics (PBPK) tool predicts tissue-level concentrations resulting from in vivo doses, while the In Vitro–In Vivo Extrapolation (IVIVE) tool translates in vitro activity concentrations to equivalent in vivo dose estimates. The Chemical Characterization tool displays distributions of physicochemical properties, bioactivity- and structure-based projections, and consumer product use information. Chemical Quest, the newest ICE tool, allows users to search for structurally similar chemicals to a target chemical or substructure from within the extensive ICE database. Retrieved information on target chemicals and those with similar structures can then be used to query other ICE tools and datasets, greatly expanding data available to address the user’s question. ICE links to other NTP and U.S. Environmental Protection Agency data sources, expanding ICE’s capacity to examine chemicals based on physicochemical properties, bioactivity, and product use categories.
- Published
- 2021
- Full Text
- View/download PDF
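The batch lookup that ICE Search performs interactively can be approximated offline by merging a user chemical list against a downloaded ICE data set. The column names and values below are assumptions for illustration, not ICE's actual schema or API.

```python
# Sketch of merging a user-defined chemical list against an ICE-style data export.
# Column names and values are assumptions for illustration, not ICE's actual schema.
import pandas as pd

user_chemicals = pd.DataFrame({"casrn": ["80-05-7", "50-00-0", "75-09-2"]})  # example CASRNs

# Stand-in for a downloaded data set (one row per chemical/assay record).
ice_export = pd.DataFrame(
    {
        "casrn": ["80-05-7", "80-05-7", "50-00-0"],
        "assay": ["assay_1", "assay_2", "assay_1"],
        "value": [1.2, 0.4, 3.3],
        "unit": ["uM", "uM", "uM"],
    }
)

merged = user_chemicals.merge(ice_export, on="casrn", how="left")

# Summary-level view: number of assay records retrieved per chemical.
print(merged.groupby("casrn", dropna=False)["assay"].count())
```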
8. An integrated chemical environment with tools for chemical safety testing
- Author
-
Jaleh Abedini, Nicole Kleinstreuer, Ruhi Rai, Eric McAfee, Agnes L. Karmaus, Warren Casey, Patricia Ceger, Isabel Lea, Arpit Tandon, John P. Rooney, Xiaoqing Chang, Catherine S. Sprankle, Kamel Mansouri, David Allen, Shannon M. Bell, Jason Phillips, and Bethany Cook
- Subjects
Databases, Factual; Computer science; Knowledge organization; Toxicology; Animal Testing Alternatives; Risk Assessment; Chemical safety; Toxicity Tests; Animals; Humans; Interpretability; Data space; General Medicine; Data science; High-Throughput Screening Assays; R package; Regulatory toxicology; Data interoperability; Controlled Terminology
- Abstract
Moving towards species-relevant chemical safety assessments and away from animal testing requires access to reliable data to develop and build confidence in new approaches. The Integrated Chemical Environment (ICE) provides tools and curated data centered around chemical safety assessment. This article describes updates to ICE, including improved accessibility and interpretability of in vitro data via mechanistic target mapping and enhanced interactive tools for in vitro to in vivo extrapolation (IVIVE). Mapping of in vitro assay targets to toxicity endpoints of regulatory importance uses literature-based mode-of-action information and controlled terminology from existing knowledge organization systems to support data interoperability with external resources. The most recent ICE update includes Tox21 high-throughput screening data curated using analytical chemistry data and assay-specific parameters to eliminate potential artifacts or unreliable activity. Also included are physicochemical/ADME parameters for over 800,000 chemicals predicted by quantitative structure-activity relationship models. These parameters are used by the new ICE IVIVE tool in combination with the U.S. Environmental Protection Agency’s httk R package to estimate in vivo exposures corresponding to in vitro bioactivity concentrations from stored or user-defined assay data. These new ICE features allow users to explore the applications of an expanded data space and facilitate building confidence in non-animal approaches.
- Published
- 2020
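The ICE IVIVE tool described above pairs in vitro bioactivity concentrations with toxicokinetic predictions from the EPA's httk R package. The Python sketch below only mimics the general idea of accounting for population variability (sampling steady-state concentrations and scaling by an upper percentile); it is not httk, and the distribution parameters are invented.

```python
# Sketch of population-variability IVIVE in the spirit of the httk-based ICE tool:
# sample steady-state plasma concentrations (Css) for a simulated population, then
# scale an in vitro bioactivity concentration by a high-end Css to get a conservative
# equivalent dose. Distribution parameters are invented; this is not the httk package.
import numpy as np

rng = np.random.default_rng(0)

bioactivity_uM = 3.0  # in vitro activity concentration (assumed)
css_samples = rng.lognormal(mean=np.log(1.2), sigma=0.5, size=10_000)  # µM per 1 mg/kg/day

css_95 = np.percentile(css_samples, 95)    # high-end Css across the simulated population
equivalent_dose = bioactivity_uM / css_95  # mg/kg/day expected to reach the bioactive level

print(f"95th percentile Css: {css_95:.2f} µM; equivalent dose: {equivalent_dose:.2f} mg/kg/day")
```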
9. Big Data Integration and Inference
- Author
-
Lyle D. Burgoon, Edward J. Perkins, Karen H. Watanabe-Sailor, Hristo Aladjov, Stephen W. Edwards, Anthony L. Schroeder, Clemens Wittwehr, Natàlia Garcia-Reyero, Shannon M. Bell, Rory B. Conolly, Michael L. Mayo, and Wan-Yun Cheng
- Subjects
Small data; Knowledge base; Computer science; Data aggregation; Big data; Inference; Data science; Automation; Data integration
- Abstract
Toxicology data are generated on large scales by toxicogenomic studies and high-throughput screening (HTS) programmes, and on smaller scales by traditional methods. Both big and small data have value for elucidating toxicological mechanisms and pathways that are perturbed by chemical stressors. In addition, years of investigations comprise a wealth of knowledge as reported in the literature that is also used to interpret new data, though knowledge is not often captured in traditional databases. With the big data era, computer automation to analyse and interpret datasets is needed, which requires aggregation of data and knowledge from all available sources. This chapter reviews ongoing efforts to aggregate toxicological knowledge in a knowledge base, based on the Adverse Outcome Pathways framework, and provides examples of data integration and inferential analysis for use in (predictive) toxicology.
- Published
- 2019
- Full Text
- View/download PDF
10. Exploring in vitro to in vivo extrapolation for exposure and health impacts of e-cigarette flavor mixtures
- Author
-
Xiaoqing Chang, Jaleh Abedini, Shannon M. Bell, and K. Monica Lee
- Subjects
Cell Survival; Extrapolation; Computational biology; Electronic Nicotine Delivery Systems; Hazard analysis; Toxicology; Models, Biological; Risk Assessment; Test article; In vivo; Humans; Safety testing; Aerosols; Inhalation Exposure; Chemistry; General Medicine; Plasma levels; In vitro; Acute toxicity; High-Throughput Screening Assays; Flavoring Agents; Biological Assay
- Abstract
In vitro to in vivo extrapolation (IVIVE) leverages in vitro biological activities to predict corresponding in vivo exposures, therefore potentially reducing the need for animal safety testing that is traditionally performed to support hazard and risk assessment. Interpretation of IVIVE predictions is affected by various factors, including the model type, exposure route and kinetic assumptions for the test article, and the choice of in vitro assay(s) that are relevant to clinical outcomes. Exposure scenarios are further complicated for mixtures, where the in vitro activity may stem from one or more components in the mixture. In this study, we used electronic cigarette (EC) aerosols, a complex mixture, to explore the impacts of these factors on the use of IVIVE in hazard identification, using open-source pharmacokinetic models of varying complexity and publicly available data. Results suggest that in vitro assay selection has a greater impact on exposure estimates than the modeling approach. Using cytotoxicity assays, high exposure estimates (>1,000 EC cartridges (pods) or >700 mL EC liquid per day) would be needed to reach the in vivo plasma levels corresponding to the in vitro assay data, suggesting acute toxicity would be unlikely in typical usage scenarios. When mechanistic (Tox21) assays were used, the exposure estimates were much lower at the low end, but the range of exposure estimates became wider across modeling approaches. These proof-of-concept results highlight challenges and complexities in IVIVE for mixtures.
- Published
- 2021
- Full Text
- View/download PDF
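The kind of back-calculation behind the "cartridges per day" figures in the record above can be sketched as simple unit arithmetic once an equivalent daily dose has been estimated. Every number below is an assumed placeholder, not data from the study.

```python
# Back-of-the-envelope sketch of translating an IVIVE-derived daily dose into an
# e-cigarette consumption estimate. Every number here is an assumption for
# illustration, not data from the study above.
def pods_per_day(required_dose_mg_per_kg: float, body_weight_kg: float,
                 liquid_ml_per_pod: float, compound_mg_per_ml: float) -> float:
    """Number of pods needed to deliver the required daily dose of a single component."""
    daily_dose_mg = required_dose_mg_per_kg * body_weight_kg
    mg_per_pod = liquid_ml_per_pod * compound_mg_per_ml
    return daily_dose_mg / mg_per_pod

# Assumed values: 50 mg/kg/day equivalent dose, 70 kg adult, 0.7 mL pods, 5 mg/mL of the component.
print(f"{pods_per_day(50.0, 70.0, 0.7, 5.0):.0f} pods/day")  # -> 1000 pods/day
```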
11. MIPHENO: data normalization for high throughput metabolite analysis.
- Author
-
Shannon M. Bell, Lyle D. Burgoon, and Robert L. Last
- Published
- 2012
- Full Text
- View/download PDF
12. Prediction of skin sensitization potency using machine learning approaches
- Author
-
Abigail Jacobs, David M. Lehmann, Michael Paris, Nicole Kleinstreuer, Warren Casey, Shannon M. Bell, Judy Strickland, Joanna Matheson, Qingda Zang, and David Allen
- Subjects
Activation test; Local lymph node assay; Skin sensitization; Human cell line; Toxicology; Machine learning; Support vector machine; Animal data; Potency; Medicine; Artificial intelligence; Animal use
- Abstract
The replacement of animal use in testing for regulatory classification of skin sensitizers is a priority for US federal agencies that use data from such testing. Machine learning models that classify substances as sensitizers or non-sensitizers without using animal data have been developed and evaluated. Because some regulatory agencies require that sensitizers be further classified into potency categories, we developed statistical models to predict skin sensitization potency for murine local lymph node assay (LLNA) and human outcomes. Input variables for our models included six physicochemical properties and data from three non-animal test methods: direct peptide reactivity assay; human cell line activation test; and KeratinoSens™ assay. Models were built to predict three potency categories using four machine learning approaches and were validated using external test sets and leave-one-out cross-validation. A one-tiered strategy modeled all three categories of response together while a two-tiered strategy modeled sensitizer/non-sensitizer responses and then classified the sensitizers as strong or weak sensitizers. The two-tiered model using the support vector machine with all assay and physicochemical data inputs provided the best performance, yielding accuracy of 88% for prediction of LLNA outcomes (120 substances) and 81% for prediction of human test outcomes (87 substances). The best one-tiered model predicted LLNA outcomes with 78% accuracy and human outcomes with 75% accuracy. By comparison, the LLNA predicts human potency categories with 69% accuracy (60 of 87 substances correctly categorized). These results suggest that computational models using non-animal methods may provide valuable information for assessing skin sensitization potency. Copyright © 2017 John Wiley & Sons, Ltd.
- Published
- 2017
- Full Text
- View/download PDF
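The two-tiered strategy described in the record above can be sketched as two chained classifiers: one separating sensitizers from non-sensitizers, and a second grading predicted sensitizers by potency. The features, random training data, and hyperparameters below are placeholders, not the published models.

```python
# Sketch of a two-tiered classification strategy like the one described above:
# tier 1 separates sensitizers from non-sensitizers, tier 2 grades sensitizers as
# strong vs. weak. Features, training data, and hyperparameters are placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 9))            # e.g., 6 physicochemical properties + 3 assay readouts
is_sensitizer = rng.integers(0, 2, 120)  # tier-1 labels (placeholder)
potency = rng.integers(0, 2, 120)        # tier-2 labels among sensitizers: 0 = weak, 1 = strong

tier1 = SVC(kernel="rbf", C=1.0).fit(X, is_sensitizer)
mask = is_sensitizer == 1
tier2 = SVC(kernel="rbf", C=1.0).fit(X[mask], potency[mask])

def predict_category(x):
    """Return 'non-sensitizer', 'weak sensitizer', or 'strong sensitizer' for one sample."""
    if tier1.predict(x.reshape(1, -1))[0] == 0:
        return "non-sensitizer"
    return "strong sensitizer" if tier2.predict(x.reshape(1, -1))[0] == 1 else "weak sensitizer"

print(predict_category(rng.normal(size=9)))
```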
13. Integrating Publicly Available Data to Generate Computationally Predicted Adverse Outcome Pathways for Fatty Liver
- Author
-
Charles E. Wood, Michelle M. Angrish, Shannon M. Bell, and Stephen W. Edwards
- Subjects
Biomedical Research; Databases, Factual; Biology; Ecotoxicology; Toxicology; Bioinformatics; Machine learning; Risk Assessment; Adverse Outcome Pathway; Animals; Humans; Computer Simulation; High-Throughput Screening Assays; Fatty Liver; Workflow; Graph; Environmental Pollutants; Artificial intelligence; Toxicogenomics; Biological network
- Abstract
New in vitro testing strategies make it possible to design testing batteries for large numbers of environmental chemicals. Full utilization of the results requires knowledge of the underlying biological networks and the adverse outcome pathways (AOPs) that describe the route from early molecular perturbations to an adverse outcome. Curation of a formal AOP is a time-intensive process and a rate-limiting step to designing these test batteries. Here, we describe a method for integrating publicly available data in order to generate computationally predicted AOP (cpAOP) scaffolds, which can be leveraged by domain experts to shorten the time for formal AOP development. A network-based workflow was used to facilitate the integration of multiple data types to generate cpAOPs. Edges between graph entities were identified through direct experimental or literature information, or computationally inferred using frequent itemset mining. Data from the TG-GATEs and ToxCast programs were used to channel large-scale toxicogenomics information into a cpAOP network (cpAOPnet) of over 20,000 relationships describing connections between chemical treatments, phenotypes, and perturbed pathways as measured by differential gene expression and high-throughput screening targets. The resulting fatty liver cpAOPnet is available as a resource to the community. Subnetworks of cpAOPs for a reference chemical (carbon tetrachloride, CCl4) and outcome (fatty liver) were compared with published mechanistic descriptions. In both cases, the computational approaches approximated the manually curated AOPs. The cpAOPnet can be used for accelerating expert-curated AOP development and to identify pathway targets that lack genomic markers or high-throughput screening tests. It can also facilitate identification of key events for designing test batteries and for classification and grouping of chemicals for follow-up testing.
- Published
- 2016
- Full Text
- View/download PDF
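The "computationally inferred using frequent itemset mining" step in the record above can be illustrated with a small example. The toy experiment "transactions" and the use of the mlxtend library are assumptions for illustration; this is not the published cpAOP workflow.

```python
# Sketch of inferring candidate chemical-gene-phenotype links by frequent itemset
# mining, in the spirit of the cpAOP workflow described above. The toy "transactions"
# (one per experiment) and the mlxtend-based implementation are illustrative assumptions.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

transactions = [
    ["CCl4", "gene:CYP2E1_up", "phenotype:fatty_liver"],
    ["CCl4", "gene:CYP2E1_up", "phenotype:fatty_liver", "gene:SREBF1_up"],
    ["chem_X", "gene:SREBF1_up", "phenotype:fatty_liver"],
    ["chem_Y", "gene:CYP2E1_up"],
]

# One-hot encode the transactions, then mine itemsets that co-occur frequently.
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

itemsets = apriori(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)

# Each high-confidence rule is a candidate edge for a cpAOP network scaffold.
print(rules[["antecedents", "consequents", "support", "confidence"]])
```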
14. Accelerating Adverse Outcome Pathway Development Using Publicly Available Data Sources
- Author
-
Holly M. Mortensen, Stephen W. Edwards, Mark D. Nelms, Noffisat Oki, and Shannon M. Bell
- Subjects
Information Management; Computer science; Health, Toxicology and Mutagenesis; Pharmacology toxicology; Management, Monitoring, Policy and Law; Ecotoxicology; Bioinformatics; Risk Assessment; Toxicity Tests; Adverse Outcome Pathway; Humans; Computer Simulation; Nature and Landscape Conservation; Public Health, Environmental and Occupational Health; Data science; Knowledge base
- Abstract
The adverse outcome pathway (AOP) concept links molecular perturbations with organism and population-level outcomes to support high-throughput toxicity (HTT) testing. International efforts are underway to define AOPs and store the information supporting these AOPs in a central knowledge base; however, this process is currently labor-intensive and time-consuming. Publicly available data sources provide a wealth of information that could be used to define computationally predicted AOPs (cpAOPs), which could serve as a basis for creating expert-derived AOPs in a much more efficient way. Computational tools for mining large datasets provide the means for extracting and organizing the information captured in these public data sources. Using cpAOPs as a starting point for expert-derived AOPs should accelerate AOP development. Coupling this with tools to coordinate and facilitate the expert development efforts will increase the number and quality of AOPs produced, which should play a key role in advancing the adoption of HTT testing, thereby reducing the use of animals in toxicity testing and greatly increasing the number of chemicals that can be tested.
- Published
- 2016
- Full Text
- View/download PDF
15. An Integrated Chemical Environment to Support 21st-Century Toxicology
- Author
-
Warren Casey, Andy Shapiro, Arpit Tandon, Jason Phillips, Alexander Sedykh, Nicole Kleinstreuer, David Allen, Shannon M. Bell, Stephen Q. Morefield, Catherine S. Sprankle, Ruchir R. Shah, and Elizabeth A. Maull
- Subjects
Hazard; Internet; Data Collection; Databases, Factual; Computer science; Health, Toxicology and Mutagenesis; Public Health, Environmental and Occupational Health; Toxicology; Reference data; Workflow; Web resource; Test data
- Abstract
SUMMARY: Access to high-quality reference data is essential for the development, validation, and implementation of in vitro and in silico approaches that reduce and replace the use of animals in toxicity testing. Currently, these data must often be pooled from a variety of disparate sources to efficiently link a set of assay responses and model predictions to an outcome or hazard classification. To provide a central access point for these purposes, the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods developed the Integrated Chemical Environment (ICE) web resource. The ICE data integrator allows users to retrieve and combine data sets and to develop hypotheses through data exploration. Open-source computational workflows and models will be available for download and application to local data. ICE currently includes curated in vivo test data, reference chemical information, in vitro assay data (including Tox21™/ToxCast™ high-throughput screening data), and in silico model predictions. Users can query these data collections focusing on end points of interest such as acute systemic toxicity, endocrine disruption, skin sensitization, and many others. ICE is publicly accessible at https://ice.ntp.niehs.nih.gov. https://doi.org/10.1289/EHP1759.
- Published
- 2017
16. Linking Environmental Exposure to Toxicity
- Author
-
Jeremy A. Leonard, Lyle D. Burgoon, Yu-Mei Tan, Stephen W. Edwards, Mark D. Nelms, Shannon M. Bell, and Noffisat Oki
- Subjects
Engineering; Target site; Risk analysis; Adverse outcomes; Stressor; Toxicity; Adverse Outcome Pathway; Screening method; Environmental exposure
- Abstract
As the number of chemicals and environmental toxicants in commerce continues to increase, so does the need to understand the links between exposure to these stressors and any potential toxic reactions. Assessing the impact of these stressors on public health as well as our environment requires an understanding of the underlying mechanistic processes connecting their introduction into the environment to the associated adverse outcomes. Traditional in vivo methods of toxicity testing have become too costly and inefficient. In recent times, in vitro high-throughput toxicity screening methods have been introduced to reduce the burden of in vivo testing and keep pace with the ever-increasing number of required tests. The adverse outcome pathway (AOP) concept has been adopted by many in the toxicology community as a framework for linking the biological events that occur from the point of contact with these stressors to the resulting adverse outcome. This provides a mechanistic framework for understanding the potential impacts of perturbations that are measured via in vitro testing strategies. The aggregate exposure pathway (AEP) has been proposed as a companion framework to the AOP. The goal of the AEP is to describe the path from the introduction of the stressor into the environment at its source to a target site within an individual, at concentrations comparable with those used in the in vitro toxicity tests. Together, these frameworks provide a comprehensive view of the source-to-adverse-outcome continuum. Standardizing our representation of the mechanistic information in this way allows for increased interoperability of computational models describing different parts of the system. It also aids in translating new research in exposure science and toxicology for risk assessors and decision makers when assessing the impact of specific stressors on endpoints of regulatory significance.
- Published
- 2017
- Full Text
- View/download PDF
17. Large-Scale Reverse Genetics in Arabidopsis: Case Studies from the Chloroplast 2010 Project
- Author
-
Imad Ajjawi, Yan Lu, Linda J. Savage, Shannon M. Bell, and Robert L. Last
- Subjects
Chlorophyll; DNA, Bacterial; Chloroplasts; DNA, Plant; Physiology; Mutant; Arabidopsis; Mutagenesis; Plant Science; Fluorescence; Gene Expression Regulation, Plant; Acyl Carrier Protein; Genetics; Arabidopsis thaliana; Gene; Arabidopsis Proteins; Fatty Acids; Genomics; Plants, Genetically Modified; Phenotype; Reverse genetics; Forward genetics; Mutagenesis, Insertional; Mutation
- Abstract
Traditionally, phenotype-driven forward genetic plant mutant studies have been among the most successful approaches to revealing the roles of genes and their products and elucidating biochemical, developmental, and signaling pathways. A limitation is that it is time consuming, and sometimes technically challenging, to discover the gene responsible for a phenotype by map-based cloning or discovery of the insertion element. Reverse genetics is also an excellent way to associate genes with phenotypes, although an absence of detectable phenotypes often results when screening a small number of mutants with a limited range of phenotypic assays. The Arabidopsis Chloroplast 2010 Project (www.plastid.msu.edu) seeks synergy between forward and reverse genetics by screening thousands of sequence-indexed Arabidopsis (Arabidopsis thaliana) T-DNA insertion mutants for a diverse set of phenotypes. Results from this project are discussed that highlight the strengths and limitations of the approach. We describe the discovery of altered fatty acid desaturation phenotypes associated with mutants of At1g10310, previously described as a pterin aldehyde reductase in folate metabolism. Data are presented to show that growth, fatty acid, and chlorophyll fluorescence defects previously associated with antisense inhibition of synthesis of the family of acyl carrier proteins can be attributed to a single gene insertion in Acyl Carrier Protein4 (At4g25050). A variety of cautionary examples associated with the use of sequence-indexed T-DNA mutants are described, including the need to genotype all lines chosen for analysis (even when they number in the thousands) and the presence of tagged and untagged secondary mutations that can lead to the observed phenotypes.
- Published
- 2009
- Full Text
- View/download PDF
18. Systematic Omics Analysis Review (SOAR) tool to support risk assessment
- Author
-
Ila Cote, Natàlia Garcia-Reyero, Lyle D. Burgoon, Emma R. McConnell, Rong-Lin Wang, Edward J. Perkins, Shannon M. Bell, and Ping Gong
- Subjects
Research Validity; Microarrays; Science Policy; Predictive Toxicology; Biological Data Management; Toxicology; Research and Analysis Methods; Ecotoxicology; Bioinformatics; Risk Assessment; Toxicogenetics; Surveys and Questionnaires; Medicine and Health Sciences; Animals; Humans; Medicine; Public and Occupational Health; SOAR; Research Integrity; Oligonucleotide Array Sequence Analysis; Multidisciplinary; Minimum Information About a Microarray Experiment; Systems Biology; Gene Expression Profiling; Biology and Life Sciences; Computational Biology; Research Assessment; Reference Standards; Omics; Data science; Review Literature as Topic; Bioassays and Physiological Analysis; Systematic review; Research Reporting Guidelines; Health Care; Risk assessment; Environmental Health
- Abstract
Environmental health risk assessors are challenged to understand and incorporate new data streams as the field of toxicology continues to adopt new molecular and systems biology technologies. Systematic screening reviews can help risk assessors and assessment teams determine which studies to consider for inclusion in a human health assessment. A tool for systematic reviews should be standardized and transparent in order to consistently determine which studies meet minimum quality criteria prior to performing in-depth analyses of the data. The Systematic Omics Analysis Review (SOAR) tool is focused on assisting risk assessment support teams in performing systematic reviews of transcriptomic studies. SOAR is a spreadsheet tool of 35 objective questions developed by domain experts, focused on transcriptomic microarray studies, and including four main topics: test system, test substance, experimental design, and microarray data. The tool will be used as a guide to identify studies that meet basic published quality criteria, such as those defined by the Minimum Information About a Microarray Experiment standard and the Toxicological Data Reliability Assessment Tool. Seven scientists were recruited to test the tool by using it to independently rate 15 published manuscripts that study chemical exposures with microarrays. Using their feedback, questions were weighted based on importance of the information and a suitability cutoff was set for each of the four topic sections. The final validation resulted in 100% agreement between the users on four separate manuscripts, showing that the SOAR tool may be used to facilitate the standardized and transparent screening of microarray literature for environmental human health risk assessment.
- Published
- 2014
19. Molecular genetic studies and delineation of the oculocutaneous albinism phenotype in the Pakistani population
- Author
-
Shannon M Bell, Thomas J. Jaworek, Rehan S. Shaikh, Tasleem Kausar, Muhammad Ali, Muhammad Imran Maqsood, Furhan Iqbal, Nabeela Tariq, Saima Riazuddin, Asma Sohail, Zubair M. Ahmed, and Shafqat Rasool
- Subjects
Male; Candidate gene; Polymerase Chain Reaction; Pakistan; Genetics (clinical); Pharmacology (medical); TYRP1; Child; Cells, Cultured; Hypopigmentation; Genetics; OCA2; Mutation; Exons; General Medicine; Middle Aged; Oculocutaneous albinism; Pedigree; Phenotype; Albinism, Oculocutaneous; Albinism; Melanocytes; Female; Adult; SLC45A2; Adolescent; Biology; Young Adult; Humans; SLC24A5; Fluorescent Dyes; TYR; Genetic heterogeneity; Eye diseases; Exon-trapping
- Abstract
Background: Oculocutaneous albinism (OCA) is caused by a group of genetically heterogeneous inherited defects that result in the loss of pigmentation in the eyes, skin and hair. Mutations in the TYR, OCA2, TYRP1 and SLC45A2 genes have been shown to cause isolated OCA. No comprehensive analysis has been conducted to study the spectrum of OCA alleles prevailing in Pakistani albino populations. Methods: We enrolled 40 large Pakistani families and screened them for OCA genes and a candidate gene, SLC24A5. Protein function effects were evaluated using in silico prediction algorithms and ex vivo studies in human melanocytes. The effects of splice-site mutations were determined using an exon-trapping assay. Results: Screening of the TYR gene revealed four known (p.Arg299His, p.Pro406Leu, p.Gly419Arg, p.Arg278*) and three novel mutations (p.Pro21Leu, p.Cys35Arg, p.Tyr411His) in ten families. Ex vivo studies revealed the retention of an EGFP-tagged mutant (p.Pro21Leu, p.Cys35Arg or p.Tyr411His) tyrosinase in the endoplasmic reticulum (ER) at 37°C, but a significant fraction of p.Cys35Arg and p.Tyr411His left the ER in cells grown at a permissive temperature (31°C). Three novel (p.Asp486Tyr, p.Leu527Arg, c.1045-15T>G) and two known mutations (p.Pro743Leu, p.Ala787Thr) of OCA2 were found in fourteen families. Exon-trapping assays with a construct containing a novel c.1045-15T>G mutation revealed an error in splicing. No mutation in TYRP1, SLC45A2, and SLC24A5 was found in the remaining 16 families. Clinical evaluation of the families segregating either TYR or OCA2 mutations showed nystagmus, photophobia, and loss of pigmentation in the skin or hair follicles. Most of the affected individuals had grayish-blue colored eyes. Conclusions: Our results show that ten and fourteen families harbored mutations in the TYR and OCA2 genes, respectively. Our findings, along with the results of previous studies, indicate that the p.Cys35Arg, p.Arg278* and p.Gly419Arg alleles of TYR and the p.Asp486Tyr and c.1045-15T>G alleles of OCA2 are the most common causes of OCA in Pakistani families. To the best of our knowledge, this study represents the first documentation of OCA2 alleles in the Pakistani population. A significant proportion of our cohort did not have mutations in known OCA genes. Overall, our study contributes to the development of genetic testing protocols and genetic counseling for OCA in Pakistani families.
- Published
- 2012
- Full Text
- View/download PDF