8 results on '"Altintas, Ilkay"'
Search Results
2. A demonstration of modularity, reuse, reproducibility, portability and scalability for modeling and simulation of cardiac electrophysiology using Kepler Workflows.
- Author
-
Yang, Pei-Chi, Purawat, Shweta, Ieong, Pek U, Jeng, Mao-Tsuen, DeMarco, Kevin R, Vorobyov, Igor, McCulloch, Andrew D, Altintas, Ilkay, Amaro, Rommie E, and Clancy, Colleen E
- Subjects
Heart, Humans, Reproducibility of Results, Models, Cardiovascular, Computer Simulation, Workflow, Proof of Concept Study, Mathematical Sciences, Biological Sciences, Information and Computing Sciences, Bioinformatics - Abstract
Multi-scale computational modeling is a major branch of computational biology as evidenced by the US federal interagency Multi-Scale Modeling Consortium and major international projects. It invariably involves specific and detailed sequences of data analysis and simulation, often with multiple tools and datasets, and the community recognizes improved modularity, reuse, reproducibility, portability and scalability as critical unmet needs in this area. Scientific workflows are a well-recognized strategy for addressing these needs in scientific computing. While there are good examples of the use of scientific workflows in bioinformatics, medical informatics, biomedical imaging and data analysis, there are fewer examples in multi-scale computational modeling in general and cardiac electrophysiology in particular. Cardiac electrophysiology simulation is a mature area of multi-scale computational biology that serves as an excellent use case for developing and testing new scientific workflows. In this article, we develop, describe and test a computational workflow that serves as a proof of concept of a platform for the robust integration and implementation of a reusable and reproducible multi-scale cardiac cell and tissue model that is expandable, modular and portable. The workflow described leverages Python and the Kepler-Python actor for plotting and pre/post-processing. During all stages of the workflow design, we rely on freely available open-source tools to make our workflow freely usable by scientists.
- Published
- 2019
2. Perspectives on automated composition of workflows in the life sciences: [version 1; peer review: 2 approved]
- Author
-
Lamprecht, Anna-Lena, Palmblad, Magnus, Ison, Jon, Schwämmle, Veit, Manir, Mohammad Sadnan Al, Altintas, Ilkay, Amor, Ammar Ben Hadj, Capella-Gutierrez, Salvador, Charonyktakis, Paulos, Crusoe, Michael R., Gil, Yolanda, Goble, Carole, Griffin, Timothy J., Groth, Paul, Ienasescu, Hans, Jagtap, Pratik, Kalaš, Matúš, Kasalica, Vedran, Khanteymoori, Alireza, Kuhn, Tobias, Mei, Hailiang, Ménager, Hervé, Möller, Steffen, Robert, Vincent, Soiland-Reyes, Stian, Stevens, Robert, Szaniszlo, Szoke, Verberne, Suzan, Verhoeven, Aswin, and Wolstencroft, Katherine
- Subjects
life sciences, semantic domain modelling, computational pipelines, workflow benchmarking, scientific workflows, bioinformatics, automated workflow composition - Abstract
Scientific data analyses often combine several computational tools in automated pipelines, or workflows. Thousands of such workflows have been used in the life sciences, though their composition has remained a cumbersome manual process due to a lack of standards for annotation, assembly, and implementation. Recent technological advances have brought the long-standing vision of automated workflow composition back into focus. This article summarizes a recent Lorentz Center workshop dedicated to automated composition of workflows in the life sciences. We survey previous initiatives to automate the composition process, and discuss the current state of the art and future perspectives. We start by drawing the "big picture" of the scientific workflow development life cycle, before surveying and discussing current methods, technologies and practices for semantic domain modelling, automation in workflow development, and workflow assessment. Finally, we derive a roadmap of individual and community-based actions to work toward the vision of automated workflow development in the forthcoming years. A central outcome of the workshop is a general description of the workflow life cycle in six stages: 1) scientific question or hypothesis, 2) conceptual workflow, 3) abstract workflow, 4) concrete workflow, 5) production workflow, and 6) scientific results. The transitions between stages are facilitated by diverse tools and methods, usually incorporating domain knowledge in some form. Formal semantic domain modelling is hard and often a bottleneck for the application of semantic technologies. However, life science communities have made considerable progress here in recent years and are continuously improving, renewing interest in the application of semantic technologies for workflow exploration, composition and instantiation.
Combined with systematic benchmarking with reference data and large-scale deployment of production-stage workflows, such technologies enable a more systematic process of workflow development than we know today. We believe that this can lead to more robust, reusable, and sustainable workflows in the future.
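The six-stage life cycle described in this abstract can be sketched as a simple ordered state machine; the enum and transition helper below are purely illustrative (the identifier names are assumptions, not from the paper):

```python
from enum import Enum
from typing import Optional

class WorkflowStage(Enum):
    # The six stages of the workflow life cycle, in order.
    SCIENTIFIC_QUESTION = 1
    CONCEPTUAL_WORKFLOW = 2
    ABSTRACT_WORKFLOW = 3
    CONCRETE_WORKFLOW = 4
    PRODUCTION_WORKFLOW = 5
    SCIENTIFIC_RESULTS = 6

def next_stage(stage: WorkflowStage) -> Optional[WorkflowStage]:
    """Return the following stage, or None once results are reached."""
    if stage is WorkflowStage.SCIENTIFIC_RESULTS:
        return None
    return WorkflowStage(stage.value + 1)
```

The point of the sketch is that each transition (e.g. conceptual to abstract workflow) is where the abstract says tools and domain knowledge intervene; a real system would attach those tools to each transition rather than just advancing an enum.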
- Published
- 2021
4. Kepler + CometCloud: Dynamic Scientific Workflow Execution on Federated Cloud Resources.
- Author
-
Wang, Jianwu, Abdelbaky, Moustafa, Diaz-Montes, Javier, Purawat, Shweta, Parashar, Manish, and Altintas, Ilkay
- Subjects
KEPLER problem, CLOUD computing, WORKFLOW, BIOINFORMATICS, INTEGRALS - Abstract
As more and more public and private Cloud resources are becoming available, it is common for a user to have access to multiple Cloud resources at the same time. Cloud federation dynamically aggregates multiple Cloud resources into a federated one. This paper explores how to build and run scientific workflows on top of a federated Cloud by integrating the Kepler scientific workflow platform with the CometCloud platform. Our integration can leverage capabilities of the two platforms: 1) dynamic resource federation, provisioning and allocation from CometCloud; 2) easy workflow composition from Kepler; 3) dynamic workflow scheduling and execution from the integration. We apply our integration to a bioinformatics workflow with three Cloud resources to evaluate its capabilities. We also discuss possible future directions for the integration.
- Published
- 2016
- Full Text
- View/download PDF
5. MAAMD: a workflow to standardize meta-analyses and comparison of affymetrix microarray data.
- Author
-
Gan, Zhuohui, Wang, Jianwu, Salomonis, Nathan, Stowe, Jennifer C., Haddad, Gabriel G., McCulloch, Andrew D., Altintas, Ilkay, and Zambon, Alexander C.
- Subjects
EMPLOYEES' workload, BIOINFORMATICS, MEDICAL informatics, DROSOPHILA, GENETIC regulation - Abstract
Background: Mandatory deposit of raw microarray data files for public access, prior to study publication, provides significant opportunities to conduct new bioinformatics analyses within and across multiple datasets. Analysis of raw microarray data files (e.g. Affymetrix CEL files) can be time consuming, complex, and requires fundamental computational and bioinformatics skills. The development of analytical workflows to automate these tasks simplifies the processing of, improves the efficiency of, and serves to standardize multiple and sequential analyses. Once installed, workflows facilitate the tedious steps required to run rapid intra- and inter-dataset comparisons. Results: We developed a workflow to facilitate and standardize Meta-Analysis of Affymetrix Microarray Data (MAAMD) in Kepler. Two freely available stand-alone software tools, R and AltAnalyze, were embedded in MAAMD. The inputs of MAAMD are user-editable CSV files, which contain sample information and parameters describing the locations of input files and required tools. MAAMD was tested by analyzing 4 different GEO datasets from mice and Drosophila. MAAMD automates data downloading, data organization, data quality control assessment, differential gene expression analysis, clustering analysis, pathway visualization, gene-set enrichment analysis, and cross-species orthologous-gene comparisons. MAAMD was utilized to identify gene orthologues responding to hypoxia or hyperoxia in both mice and Drosophila. The entire set of analyses for 4 datasets (34 total microarrays) finished in approximately one hour. Conclusions: MAAMD saves time, minimizes the required computer skills, and offers a standardized procedure for users to analyze microarray datasets and make new intra- and inter-dataset comparisons.
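The abstract describes user-editable CSV sample sheets as the workflow's input. A minimal sketch of that pattern is shown below; the column names (`sample_id`, `geo_dataset`) are hypothetical, not the actual MAAMD schema:

```python
import csv

def load_sample_sheet(path):
    """Read a user-editable CSV sample sheet into a list of row dicts."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def group_by_dataset(rows):
    """Group sample IDs by their GEO dataset accession, preserving row order.

    Assumes hypothetical columns 'geo_dataset' and 'sample_id'; a real
    sheet would also carry file locations and tool parameters.
    """
    groups = {}
    for row in rows:
        groups.setdefault(row["geo_dataset"], []).append(row["sample_id"])
    return groups
```

Driving a workflow from a plain CSV like this is what makes the pipeline user-editable without programming: changing which samples or datasets are analyzed means editing a spreadsheet, not code.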
- Published
- 2014
- Full Text
- View/download PDF
6. Using Kepler for Tool Integration in Microarray Analysis Workflows.
- Author
-
Gan, Zhuohui, Stowe, Jennifer C., Altintas, Ilkay, McCulloch, Andrew D., and Zambon, Alexander C.
- Subjects
MICROARRAY technology, WORKFLOW management, BIOINFORMATICS, DATA analysis, INTEGRATED software, GENOMICS - Abstract
Increasing numbers of genomic technologies are leading to massive amounts of genomic data, all of which require complex analysis. More and more bioinformatics analysis tools are being developed by scientists to simplify these analyses. However, different pipelines have been developed using different software environments. This makes integration of these diverse bioinformatics tools difficult. Kepler provides an open source environment to integrate these disparate packages. Using Kepler, we integrated several external tools including Bioconductor packages, AltAnalyze, a python-based open source tool, and an R-based comparison tool to build an automated workflow to meta-analyze both online and local microarray data. The automated workflow connects the integrated tools seamlessly, delivers data flow between the tools smoothly, and hence improves efficiency and accuracy of complex data analyses. Our workflow exemplifies the usage of Kepler as a scientific workflow platform for bioinformatics pipelines.
- Published
- 2014
- Full Text
- View/download PDF
7. Early Cloud Experiences with the Kepler Scientific Workflow System.
- Author
-
Wang, Jianwu and Altintas, Ilkay
- Subjects
CLOUD computing, WORKFLOW, INFORMATION resources, MACHINE theory, BIOINFORMATICS, INFORMATION theory - Abstract
With the increasing popularity of Cloud computing, there are more and more requirements for scientific workflows to utilize Cloud resources. In this paper, we present our preliminary work and experiences on enabling the interaction between the Kepler scientific workflow system and the Amazon Elastic Compute Cloud (EC2). A set of EC2 actors and Kepler Amazon Machine Images are introduced with the discussion on their different usage modes. Through two bioinformatics use cases, we demonstrate the capability of our work for both Cloud resource coordination and workflow execution on virtual Cloud resources.
- Published
- 2012
- Full Text
- View/download PDF
8. A Framework for Distributed Data-Parallel Execution in the Kepler Scientific Workflow System.
- Author
-
Wang, Jianwu, Crawl, Daniel, and Altintas, Ilkay
- Subjects
DISTRIBUTED algorithms, PARALLEL algorithms, DATA analysis, WORKFLOW, USER interfaces, BIOINFORMATICS - Abstract
Distributed Data-Parallel (DDP) patterns such as MapReduce have become increasingly popular as solutions to facilitate data-intensive applications, resulting in a number of systems supporting DDP workflows. Yet, applications or workflows built using these patterns are usually tightly-coupled with the underlying DDP execution engine they select. We present a framework for distributed data-parallel execution in the Kepler scientific workflow system that enables users to easily switch between different DDP execution engines. We describe a set of DDP actors based on DDP patterns and directors for DDP workflow executions within the presented framework. We demonstrate how DDP workflows can be easily composed in the Kepler graphical user interface through the reuse of these DDP actors and directors and how the generated DDP workflows can be executed in different distributed environments. Via a bioinformatics use case, we discuss the usability of the proposed framework and validate its execution scalability.
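The MapReduce pattern this abstract refers to can be illustrated with a tiny engine-agnostic sketch: the user supplies only map and reduce functions, which an execution engine could then run locally or on a distributed backend. This is the canonical word-count example, not Kepler's actual actor code:

```python
from itertools import groupby

def run_mapreduce(records, map_fn, reduce_fn):
    """Run the MapReduce DDP pattern locally, for illustration only."""
    # Map phase: each record yields zero or more (key, value) pairs.
    pairs = [kv for rec in records for kv in map_fn(rec)]
    # Shuffle phase: sort and group values by key.
    pairs.sort(key=lambda kv: kv[0])
    grouped = {k: [v for _, v in g]
               for k, g in groupby(pairs, key=lambda kv: kv[0])}
    # Reduce phase: fold each key's values into a single result.
    return {k: reduce_fn(k, vs) for k, vs in grouped.items()}

# Word count, the canonical MapReduce example:
def wc_map(line):
    return [(word, 1) for word in line.split()]

def wc_reduce(key, counts):
    return sum(counts)
```

Because `wc_map` and `wc_reduce` know nothing about the engine, the same pair of functions could in principle be handed to different backends, which is the decoupling the framework in this paper aims for.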
- Published
- 2012
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library