17 results on '"Philip E. Davis"'
Search Results
2. The Exascale Framework for High Fidelity coupled Simulations (EFFIS): Enabling whole device modeling in fusion science
- Author
-
Shuangxi Zhang, Berk Geveci, Matthew Wolf, Kevin Huck, E. Suchyta, Cameron W. Smith, Ruonan Wang, Stephane Ethier, Philip E. Davis, Manish Parashar, Pradeep Subedi, Gabriele Merlo, Abolaji Adesoji, Norbert Podhorszki, Qing Liu, Todd Munson, Shirley Moore, Mark S. Shephard, C.S. Chang, Jeremy Logan, Jong Choi, Lipeng Wan, Kai Germaschewski, David Pugmire, Ian Foster, Scott Klasky, Kshitij Mehta, Chris Harris, and Julien Dominski
- Subjects
Distributed computing, Fusion, Computer science, Code coupling, Fluids & plasmas, Theoretical Computer Science, Computational science, High fidelity, Workflow, Hardware and Architecture, Software
- Abstract
We present the Exascale Framework for High Fidelity coupled Simulations (EFFIS), a workflow and code coupling framework developed as part of the Whole Device Modeling Application (WDMApp) in the Exascale Computing Project. EFFIS consists of a library, command line utilities, and a collection of run-time daemons. Together, these software products enable users to easily compose and execute workflows that include: strong or weak coupling, in situ (or offline) analysis/visualization/monitoring, command-and-control actions, remote dashboard integration, and more. We describe WDMApp physics coupling cases and computer science requirements that motivate the design of the EFFIS framework. Furthermore, we explain the essential enabling technology that EFFIS leverages: ADIOS for performant data movement, PerfStubs/TAU for performance monitoring, and an advanced COUPLER for transforming coupling data from its native format to the representation needed by another application. Finally, we demonstrate EFFIS using coupled multi-simulation WDMApp workflows and exemplify how the framework supports the project’s needs. We show that EFFIS and its associated services for data movement, visualization, and performance collection do not introduce appreciable overhead to the WDMApp workflow and that the resource-dominant application’s idle time while waiting for data is minimal.
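The abstract names ADIOS as the data-movement layer EFFIS builds on. As a minimal, hedged sketch (not the EFFIS API itself, which layers workflow composition and daemons on top), the fragment below shows the kind of step-based ADIOS2 producer a coupled component might use, assuming the classic adios2 Python bindings (pre-2.10 module layout); the stream name, variable name, and engine choice are illustrative assumptions.

```python
# Illustrative ADIOS2 producer for one side of a coupled workflow (not the EFFIS API).
import numpy as np
import adios2

n = 1024
field = np.linspace(0.0, 1.0, n)             # stand-in for one code's coupling data

adios = adios2.ADIOS()
io = adios.DeclareIO("coupling-out")
io.SetEngine("SST")                          # in-memory staging; "BP4" would write files instead

var = io.DefineVariable("density", field, [n], [0], [n], adios2.ConstantDims)

writer = io.Open("gene_to_xgc", adios2.Mode.Write)
for step in range(10):
    writer.BeginStep()
    writer.Put(var, field)                   # publish this step's coupling data
    writer.EndStep()                         # a consumer can read the step as soon as it closes
writer.Close()
```

A matching step-based reader is sketched under the ADIOS 2 entry further down this list.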
- Published
- 2021
- Full Text
- View/download PDF
3. CoREC
- Author
-
Philip E. Davis, Shaohua Duan, Keita Teranishi, Pradeep Subedi, Manish Parashar, Hemanth Kolla, and Marc Gamell
- Subjects
Distributed computing, Mean time between failures, Computer science, Software engineering, Load balancing (computing), Storage efficiency, Replication (computing), Computer Science Applications, Data recovery, Dataspaces, Data access, Computational Theory and Mathematics, Hardware and Architecture, Modeling and Simulation, Scalability, Erasure code, Software
- Abstract
The dramatic increase in the scale of current and planned high-end HPC systems is leading to new challenges, such as the growing costs of data movement and IO, and the reduced mean time between failures (MTBF) of system components. In-situ workflows, i.e., executing the entire application workflows on the HPC system, have emerged as an attractive approach to address data-related challenges by moving computations closer to the data, and staging-based frameworks have been effectively used to support in-situ workflows at scale. However, the resilience of these staging-based solutions has not been addressed, and they remain susceptible to expensive data failures. Furthermore, naive use of data resilience techniques such as n-way replication and erasure codes can impact latency and/or result in significant storage overheads. In this article, we present CoREC, a scalable and resilient in-memory data staging runtime for large-scale in-situ workflows. CoREC uses a novel hybrid approach that combines dynamic replication with erasure coding based on data access patterns. It also leverages multiple levels of replication and erasure coding to support diverse data resiliency requirements. Furthermore, the article presents optimizations for load balancing and conflict-avoiding encoding, and a low-overhead, lazy data recovery scheme. We have implemented the CoREC runtime and deployed it with the DataSpaces staging service on leadership-class computing machines, and we present an experimental evaluation in the article. The experiments demonstrate that CoREC can tolerate in-memory data failures while maintaining low latency and sustaining high overall storage efficiency at large scales.
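The hybrid idea of combining replication for hot data with erasure coding for cold data can be illustrated with a deliberately simplified sketch: full replication buys cheap recovery for frequently accessed objects, while a single XOR parity block protects cold objects at lower storage cost. This is not CoREC's implementation or API; the threshold, object layout, and trivial XOR code below are assumptions for illustration.

```python
# Simplified hybrid resilience: replicate "hot" data, XOR-parity-protect "cold" data.
import numpy as np

HOT_ACCESS_THRESHOLD = 100   # assumed cutoff on recent access count

def protect(chunks, access_count, replicas=2):
    """Return a resilience plan for one staged object split into chunks."""
    if access_count >= HOT_ACCESS_THRESHOLD:
        # Hot object: full replication -> lowest recovery latency, highest storage cost.
        return {"scheme": "replication",
                "copies": [list(chunks) for _ in range(replicas)]}
    # Cold object: keep one XOR parity chunk -> can rebuild any single lost chunk.
    parity = np.bitwise_xor.reduce(np.array(chunks, dtype=np.uint8), axis=0)
    return {"scheme": "xor-parity", "chunks": list(chunks), "parity": parity}

def recover_lost_chunk(plan, lost_index):
    """Rebuild a single lost chunk under the XOR-parity scheme."""
    survivors = [c for i, c in enumerate(plan["chunks"]) if i != lost_index]
    rebuilt = np.bitwise_xor.reduce(np.array(survivors, dtype=np.uint8), axis=0)
    return np.bitwise_xor(rebuilt, plan["parity"])

chunks = [np.frombuffer(b"abcd", dtype=np.uint8), np.frombuffer(b"efgh", dtype=np.uint8)]
plan = protect(chunks, access_count=3)                     # cold -> XOR parity
print(recover_lost_chunk(plan, lost_index=0).tobytes())    # b'abcd'
```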
- Published
- 2020
- Full Text
- View/download PDF
4. Design and Performance of Kokkos Staging Space toward Scalable Resilient Application Couplings
- Author
-
Keita Teranishi, Francesco Rizzi, Nicolas Morales, Pradeep Subedi, Bo Zhang, Philip E. Davis, and Manish Parashar
- Subjects
Computer science, Scalability, Space (mathematics), Computational science
- Published
- 2021
- Full Text
- View/download PDF
5. Transitioning from file-based HPC workflows to streaming data pipelines with openPMD and ADIOS2
- Author
-
Franz Poeschel, Juncheng E, William F. Godoy, Norbert Podhorszki, Scott Klasky, Greg Eisenhauer, Philip E. Davis, Lipeng Wan, Ana Gainaru, Junmin Gu, Fabian Koller, René Widera, Michael Bussmann, and Axel Huebl
- Subjects
High performance computing, openPMD, ADIOS, big data, RDMA, streaming, Distributed, Parallel, and Cluster Computing (cs.DC)
- Abstract
This paper aims to create a transition path from file-based IO to streaming-based workflows for scientific applications in an HPC environment. By using the openPMD-api, traditional workflows limited by filesystem bottlenecks can be overcome and flexibly extended for in situ analysis. The openPMD-api is a library for the description of scientific data according to the Open Standard for Particle-Mesh Data (openPMD). Its approach towards recent challenges posed by hardware heterogeneity lies in the decoupling of data description in domain sciences, such as plasma physics simulations, from concrete implementations in hardware and IO. The streaming backend is provided by the ADIOS2 framework, developed at Oak Ridge National Laboratory. This paper surveys two openPMD-based loosely coupled setups to demonstrate flexible applicability and to evaluate performance. In loose coupling, as opposed to tight coupling, two (or more) applications are executed separately, e.g. in individual MPI contexts, yet cooperate by exchanging data. This way, a streaming-based workflow allows for standalone codes instead of tightly coupled plugins, using a unified streaming-aware API and leveraging the high-speed communication infrastructure available in modern compute clusters for massive data exchange. We identify new challenges in resource allocation and in the need for strategies for flexible data distribution, demonstrating their influence on efficiency and scaling on the Summit compute system. The presented setups show the potential for a more flexible use of compute resources brought by streaming IO as well as the ability to increase throughput by avoiding filesystem bottlenecks.
Comment: 18 pages, 9 figures, SMC2021; supplementary material at https://zenodo.org/record/4906276
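A minimal openPMD-api write sketch in Python follows, assuming the openpmd_api package built with the ADIOS2 backend; the series name and mesh name are placeholders. As the abstract suggests, essentially the same code switches from file-based to streaming IO by changing the series name's extension (e.g. ".bp" vs ".sst").

```python
# Minimal openPMD-api writer; renaming "data.bp" to "data.sst" selects the
# ADIOS2 SST streaming backend instead of file output (assuming ADIOS2 support).
import numpy as np
import openpmd_api as io

series = io.Series("data.bp", io.Access.create)

it = series.iterations[0]
rho = it.meshes["density"][io.Mesh_Record_Component.SCALAR]

local = np.random.rand(64).astype(np.float64)        # this rank's piece of the field
rho.reset_dataset(io.Dataset(local.dtype, local.shape))
rho.store_chunk(local, [0], [64])

series.flush()        # actually moves the data (to the file or to the stream)
del series            # closes the series / ends the stream
```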
- Published
- 2021
6. Toward Resilient Heterogeneous Computing Workflow through Kokkos-DataSpaces Integration
- Author
-
Bo Zhang, Nicolas Morales, Keita Teranishi, Manish Parashar, and Philip E. Davis
- Subjects
Dataspaces, Workflow, Computer science, Distributed computing, Symmetric multiprocessor system
- Published
- 2020
- Full Text
- View/download PDF
7. ADIOS 2: The Adaptable Input Output System. A framework for high-performance data management
- Author
-
Mark Kim, Seiji Tsutsumi, George Ostrouchov, James Kress, Keichi Takahashi, Lipeng Wan, Kesheng Wu, Norbert Podhorszki, Kshitij Mehta, Kai Germaschewski, Franz Poeschel, Scott Klasky, Ruonan Wang, Chuck Atkins, Jong Choi, Matthew Wolf, Qing Liu, David Pugmire, Jeremy Logan, William F. Godoy, Philip E. Davis, Manish Parashar, Junmin Gu, Nicholas Thompson, E. Suchyta, Kevin Huck, Greg Eisenhauer, Axel Huebl, and Tahsin Kurc
- Subjects
Staging, Computer science, Fortran, Data management, Scalable I/O, Data science, Exascale computing, Lustre/GPFS file systems, MATLAB, Application programming interface, Programming language, In-situ, Python (programming language), Supercomputer, Computer Science Applications, Personal computer, RDMA, High-performance computing (HPC), Software
- Abstract
We present ADIOS 2, the latest version of the Adaptable Input Output (I/O) System. ADIOS 2 addresses scientific data management needs ranging from scalable I/O in supercomputers to data analysis in personal computer and cloud systems. Version 2 introduces a unified application programming interface (API) that enables seamless data movement through files, wide-area networks, and direct memory access, as well as high-level APIs for data analysis. The internal architecture provides a set of reusable and extendable components for managing data presentation and transport mechanisms for new applications. ADIOS 2 bindings are available in C++11, C, Fortran, Python, and Matlab and are currently used across different scientific communities. ADIOS 2 provides a communal framework to tackle data management challenges as we approach the exascale era of supercomputing.
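As an example of the unified step-based API the abstract describes, here is a hedged Python read sketch using the classic adios2 bindings; it would consume the stream produced by the writer sketched under the EFFIS entry above. The stream and variable names are assumptions.

```python
# Step-based ADIOS2 reader (Python bindings). Names are placeholders.
import numpy as np
import adios2

adios = adios2.ADIOS()
io = adios.DeclareIO("coupling-in")
io.SetEngine("SST")                          # match the producer; "BP4"/"BP5" would read files

reader = io.Open("gene_to_xgc", adios2.Mode.Read)
while reader.BeginStep() == adios2.StepStatus.OK:
    var = io.InquireVariable("density")
    data = np.zeros(var.Shape(), dtype=np.float64)
    reader.Get(var, data, adios2.Mode.Sync)  # fill `data` for this step
    reader.EndStep()
    print("step", reader.CurrentStep(), "mean =", float(data.mean()))
reader.Close()
```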
- Published
- 2020
8. Addressing data resiliency for staging based scientific workflows
- Author
-
Shaohua Duan, Pradeep Subedi, Philip E. Davis, and Manish Parashar
- Subjects
Cray XK7, Distributed computing, Dataspaces, Workflow, Correctness, Computer science, Dataflow, Software engineering, Anomaly detection
- Abstract
As applications move towards extreme scales, data-related challenges are becoming significant concerns, and in-situ workflows based on data staging and in-situ/in-transit data processing have been proposed to address these challenges. Increasing scale is also expected to result in an increase in the rate of silent data corruption errors, which will impact both the correctness and performance of applications. Furthermore, this impact is amplified in the case of in-situ workflows due to the dataflow between the component applications of the workflow. While existing research has explored silent error detection at the application level, silent error detection for workflows remains an open challenge. This paper addresses silent error detection for extreme-scale in-situ workflows. The presented approach leverages idle computation resources in data staging to enable timely detection and recovery from silent data corruption, effectively reducing the propagation of corrupted data and the end-to-end workflow execution time in the presence of silent errors. As an illustration of this approach, we use a spatial outlier detection approach in staging to detect errors introduced in data transfer and storage. We also provide a CPU-GPU hybrid staging framework for error detection in order to achieve faster error identification. We have implemented our approach within the DataSpaces staging service and evaluated it using both synthetic and real workflows on a Cray XK7 system (Titan) at different scales. We demonstrate that, in the presence of silent errors, enabling error detection on staged data alongside a checkpoint/restart scheme improves the total in-situ workflow execution time by up to 22% in comparison with using checkpoint/restart alone.
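As a hedged illustration of the kind of spatial outlier test that could run on staged data, the sketch below flags grid points that deviate from a local median by more than a multiple of the robust residual spread; the window size and threshold are assumptions, and this is not the paper's specific detector.

```python
# Toy spatial-outlier check for silent data corruption in a staged 2D field.
# Window size and threshold are illustrative; not the detector used in the paper.
import numpy as np
from scipy.ndimage import median_filter

def flag_silent_errors(field, window=5, k=8.0):
    """Return a boolean mask of points that look corrupted."""
    local_median = median_filter(field, size=window)
    residual = np.abs(field - local_median)
    mad = np.median(residual) + 1e-12      # robust scale estimate of the residuals
    return residual > k * mad

clean = np.random.rand(128, 128)
corrupted = clean.copy()
corrupted[40, 70] = 1e6                    # injected bit-flip-like spike
print(flag_silent_errors(corrupted).sum())  # expect a single flagged point
```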
- Published
- 2019
- Full Text
- View/download PDF
9. Single-Event Characterization of the 16 nm FinFET Xilinx UltraScale+TM RFSoC Field-Programmable Gate Array under Proton Irradiation
- Author
-
Doug Thorpe, Philip E. Davis, Mark Learn, and David S. Lee
- Subjects
Physics, Proton, Nuclear & particle physics, Event (computing), Upset, Optoelectronics, Irradiation, Field-programmable gate array
- Abstract
This study examines the single-event upset and single-event latch-up susceptibility of the Xilinx 16 nm FinFET Zynq UltraScale+ RFSoC FPGA under proton irradiation. Results for SEU in configuration memory, BlockRAM memory, and device SEL are given.
- Published
- 2019
- Full Text
- View/download PDF
10. Towards a Smart, Internet-Scale Cache Service for Data Intensive Scientific Applications
- Author
-
Anthony Simonet, Ivan Rodero, Zhe Wang, Philip E. Davis, Yubo Qin, Azita Nouri, and Manish Parashar
- Subjects
Service (systems architecture), Computer science, Quality of service, Networking & telecommunications, Usability, Information repository, Data science, Cyberinfrastructure, Ocean Observatories Initiative, Cache
- Abstract
Data and services provided by shared facilities, such as large-scale observing facilities, have become important enablers of scientific insights and discoveries across many science and engineering disciplines. Ensuring satisfactory quality of service can be challenging for facilities, due to their remote locations and the distributed nature of the instruments, observatories, and users, as well as the rapid growth of data volumes and rates. This research explores how knowledge of facilities' usage patterns, coupled with emerging cyberinfrastructure, can be leveraged to improve their performance, usability, and scientific impact. We propose a framework with a smart, internet-scale cache augmented with prefetching and data placement strategies to improve data delivery performance for scientific facilities. Our evaluations, which are based on the NSF Ocean Observatories Initiative, demonstrate that our framework is able to predict user requests and reduce data movements by more than 56% across networks.
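A hedged sketch of the core caching idea follows: an LRU cache that, on each miss, also fetches the next few segments of a time-ordered data stream. The class name, the simple sequential-lookahead rule, and the parameters are assumptions standing in for the framework's learned usage patterns, not its actual policy.

```python
# Toy LRU cache with sequential prefetching for time-ordered data requests.
from collections import OrderedDict

class PrefetchingCache:
    """LRU cache that prefetches the next few segments on every miss."""

    def __init__(self, fetch, capacity=128, lookahead=4):
        self.fetch = fetch                  # function: segment_id -> bytes (remote read)
        self.capacity = capacity
        self.lookahead = lookahead
        self.store = OrderedDict()          # segment_id -> data, kept in LRU order
        self.misses = 0                     # blocking round trips seen by the user

    def _put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used entry

    def get(self, segment_id):
        if segment_id in self.store:
            self.store.move_to_end(segment_id)
            return self.store[segment_id]
        self.misses += 1
        self._put(segment_id, self.fetch(segment_id))
        # Prefetch upcoming segments, assuming largely sequential access patterns.
        for nxt in range(segment_id + 1, segment_id + 1 + self.lookahead):
            if nxt not in self.store:
                self._put(nxt, self.fetch(nxt))
        return self.store[segment_id]

cache = PrefetchingCache(fetch=lambda i: f"segment-{i}".encode())
for i in range(10):                         # a sequential scan over 10 segments
    cache.get(i)
print("blocking misses:", cache.misses)     # 2 instead of 10 without prefetching
```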
- Published
- 2019
- Full Text
- View/download PDF
11. First coupled GENE–XGC microturbulence simulations
- Author
-
Junyi Cheng, Amitava Bhattacharjee, Philip E. Davis, Gabriele Merlo, Robert Hager, Salomon Janhunen, Frank Jenko, Kai Germaschewski, Scott Klasky, C.S. Chang, Julien Dominski, E. Suchyta, and Scott Parker
- Subjects
Physics, Gyrokinetic electromagnetic, Numerical analysis, Condensed Matter Physics, Topology, Grid, Fluids & plasmas, Coupling (physics), Plasma physics, Frequency domain, Microturbulence, Poisson's equation
- Abstract
Covering the core and the edge region of a tokamak, respectively, the two gyrokinetic turbulence codes Gyrokinetic Electromagnetic Numerical Experiment (GENE) and X-point Gyrokinetic Code (XGC) have been successfully coupled by exchanging three-dimensional charge density data needed to solve the gyrokinetic Poisson equation over the entire spatial domain. Certain challenges for the coupling procedure arise from the fact that the two codes employ completely different numerical methods. This includes, in particular, the necessity to introduce mapping procedures for the transfer of data between the unstructured triangular mesh of XGC and the logically rectangular grid (in a combination of real and Fourier space) used by GENE. Constraints on the coupling scheme are also imposed by the use of different time integrators. First coupled simulations are presented. We have considered collisionless ion temperature gradient turbulence, in both circular and fully shaped plasmas. Coupled simulations successfully reproduce both GENE and XGC reference results, confirming the validity of the code coupling approach toward a whole device model. Many lessons learned in the present context, in particular, the need for a coupling procedure as flexible as possible, should be valuable to our and other efforts to couple different kinds of codes in pursuit of a more comprehensive description of complex real-world systems and will drive our further developments of a whole device model for fusion plasmas.
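The abstract describes mapping coupling data between XGC's unstructured triangular mesh and GENE's logically rectangular grid. A generic version of such a mapping step can be sketched with SciPy's scattered-data interpolation; the coordinates, field, and grid below are placeholders, and this is only an illustration of the concept, not the coupler's actual scheme.

```python
# Generic mesh-to-grid mapping step: interpolate a field known at scattered
# (unstructured-mesh) nodes onto a regular grid. Illustrative only.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
nodes = rng.random((500, 2))                       # (R, Z)-like coordinates of mesh nodes
charge_density = np.sin(4 * nodes[:, 0]) * np.cos(3 * nodes[:, 1])

# Target: a logically rectangular grid covering (most of) the same domain.
r = np.linspace(0.05, 0.95, 64)
z = np.linspace(0.05, 0.95, 64)
R, Z = np.meshgrid(r, z, indexing="ij")

on_grid = griddata(nodes, charge_density, (R, Z), method="linear")
print(on_grid.shape, np.nanmax(np.abs(on_grid)))   # 64x64 field ready for the other code
```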
- Published
- 2021
- Full Text
- View/download PDF
12. Coupling Exascale Multiphysics Applications: Methods and Lessons Learned
- Author
-
Choong-Seock Chang, Mark Ainsworth, Dave Pugmire, Frank Jenko, Greg Eisenhauer, Stephane Ethier, Allen D. Malony, Matthew Wolf, Franck Cappello, Kenneth Moreland, Norbert Podhorszki, Seung-Hoe Ku, Manish Parashar, Mark Kim, Scott Klasky, Sheng Di, Tom Peterka, Berk Geveci, Ozan Tugluk, Ben Whitney, Jong Youl Choi, Philip E. Davis, Julien Dominski, Ian Foster, Kshitij Mehta, Todd Munson, Hanqi Guo, E. Suchyta, Kevin Huck, Bryce Allen, Jeremy Logan, Chad Wood, Gabriele Merlo, James Kress, Qing Liu, Ruonan Wang, and Michael Churchill
- Subjects
Coupling, Computational complexity theory, Computer science, Multiphysics, Numerical & computational mathematics, Online analysis, Visualization, Titan (supercomputer), Computer architecture, Performance monitoring, Workflow scheduling
- Abstract
With the growing computational complexity of science and the complexity of new and emerging hardware, it is time to re-evaluate the traditional monolithic design of computational codes. One new paradigm is constructing larger scientific computational experiments from the coupling of multiple individual scientific applications, each targeting their own physics, characteristic lengths, and/or scales. We present a framework constructed by leveraging capabilities such as in-memory communications, workflow scheduling on HPC resources, and continuous performance monitoring. This code coupling capability is demonstrated by a fusion science scenario, where differences between the plasma at the edges and at the core of a device have different physical descriptions. This infrastructure not only enables the coupling of the physics components, but it also connects in situ or online analysis, compression, and visualization that accelerate the time between a run and the analysis of the science content. Results from runs on Titan and Cori are presented as a demonstration.
- Published
- 2018
- Full Text
- View/download PDF
13. The SFR-M$_*$ Correlation Extends to Low Mass at High Redshift
- Author
-
Romeel Davé, Dritan Kodra, Philip E. Davis, Eric Gawiser, Rachel S. Somerville, Anton M. Koekemoer, Steven L. Finkelstein, J. A. Newman, Kartheik Iyer, P. Kurczynski, and Camilla Pacifici
- Subjects
Physics, Astronomy and Astrophysics, Astrophysics, Astrophysics of Galaxies (astro-ph.GA), Redshift, Photometric [techniques], Correlation, Space and Planetary Science, Star formation [galaxies], Low mass, Evolution [galaxies]
- Abstract
To achieve a fuller understanding of galaxy evolution, SED fitting can be used to recover quantities beyond stellar masses (M$_*$) and star formation rates (SFRs). We use Star Formation Histories (SFHs) reconstructed via the Dense Basis method of Iyer & Gawiser (2017) for a sample of $17,873$ galaxies at $0.5 < z < 6$. The evolution of the SFR-M$_*$ correlation is well described by $\log SFR = (0.80\pm 0.029 - 0.017\pm 0.010\times t_{univ})\log M_* - (6.487\pm 0.282 - 0.039\pm 0.008\times t_{univ})$, where $t_{univ}$ is the age of the universe in Gyr.
Comment: 22 pages, 10 figures. Accepted for publication in ApJ
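The fitted relation quoted in the abstract can be turned directly into a small helper for evaluating the expected log SFR at a given stellar mass and cosmic time; the coefficients below are the central values from the abstract, with their uncertainties ignored.

```python
# Evaluate the quoted SFR-M* relation using central-value coefficients only.
def log_sfr(log_mstar, t_univ_gyr):
    """Expected log10(SFR) for a given log10(M*) at universe age t_univ (Gyr)."""
    slope = 0.80 - 0.017 * t_univ_gyr
    intercept = -(6.487 - 0.039 * t_univ_gyr)
    return slope * log_mstar + intercept

# Example: a 10^9 Msun galaxy when the universe was 3 Gyr old.
print(round(log_sfr(9.0, 3.0), 2))
```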
- Published
- 2018
- Full Text
- View/download PDF
14. Scalable Parallelization of a Markov Coalescent Genealogy Sampler
- Author
-
Greg Wolffe, Adam M. Terwilliger, David Zeitler, and Philip E. Davis
- Subjects
Theoretical computer science, Markov chain, Computer science, Sampling (statistics), Population genetics, Markov process, Markov chain Monte Carlo, Parallel computing, Scalable parallelism, Genealogy, Coalescent theory, CUDA, Scalability
- Abstract
Coalescent genealogy samplers are effective tools for the study of population genetics. They are used to estimate the historical parameters of a population based upon the sampling of present-day genetic information. A popular approach employs Markov chain Monte Carlo (MCMC) methods. While effective, these methods are very computationally intensive, often taking weeks to run. Although attempts have been made to leverage parallelism in an effort to reduce runtimes, they have not resulted in scalable solutions. Due to the inherently sequential nature of MCMC methods, their performance has suffered diminishing returns when applied to large-scale computing clusters. In the interests of reduced runtimes and higher quality solutions, a more sophisticated form of parallelism is required. This paper describes a novel way to apply a recently discovered generalization of MCMC for this purpose. The new approach exploits the multiple-proposal mechanism of the generalized method to enable the desired scalable parallelism while maintaining the accuracy of the original technique.
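For context on why such samplers are hard to parallelize, here is a minimal single-proposal random-walk Metropolis step, the inherently sequential baseline; the generalized method referenced in the abstract replaces the single proposal per iteration with a pool of proposals that can be evaluated concurrently. The target density and step size below are toy placeholders, not the genealogy sampler's likelihood.

```python
# Baseline single-proposal random-walk Metropolis sampler (the sequential pattern
# that multiple-proposal generalizations aim to parallelize).
import numpy as np

def log_target(x):
    return -0.5 * np.sum(x ** 2)        # standard normal, standing in for the likelihood

def metropolis(n_samples, dim=2, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    logp = log_target(x)
    samples = np.empty((n_samples, dim))
    for i in range(n_samples):          # each iteration depends on the previous one
        proposal = x + step * rng.standard_normal(dim)
        logp_new = log_target(proposal)
        if np.log(rng.random()) < logp_new - logp:
            x, logp = proposal, logp_new
        samples[i] = x
    return samples

chain = metropolis(5000)
print(chain.mean(axis=0), chain.var(axis=0))   # should approach zero mean, unit variance
```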
- Published
- 2017
- Full Text
- View/download PDF
15. Computing Just What You Need: Online Data Analysis and Reduction at Extreme Scales
- Author
-
Ian Foster, Mark Ainsworth, Bryce Allen, Julie Bessac, Franck Cappello, Jong Youl Choi, Emil Constantinescu, Philip E. Davis, Sheng Di, Wendy Di, Hanqi Guo, Scott Klasky, Kerstin Kleese Van Dam, Tahsin Kurc, Qing Liu, Abid Malik, Kshitij Mehta, Klaus Mueller, Todd Munson, George Ostouchov, Manish Parashar, Tom Peterka, Line Pouchard, Dingwen Tao, Ozan Tugluk, Stefan Wild, Matthew Wolf, Justin M. Wozniak, Wei Xu, and Shinjae Yoo
- Subjects
Computer science, Computation, Software engineering, Supercomputer, Data science, Reduction (complexity), Software, Programming paradigm, Systems design
- Abstract
A growing disparity between supercomputer computation speeds and I/O rates makes it increasingly infeasible for applications to save all results for offline analysis. Instead, applications must analyze and reduce data online so as to output only those results needed to answer target scientific question(s). This change in focus complicates application and experiment design and introduces algorithmic, implementation, and programming model challenges that are unfamiliar to many scientists and that have major implications for the design of various elements of supercomputer systems. I review these challenges and describe methods and tools that various groups, including mine, are developing to enable experimental exploration of algorithmic, software, and system design alternatives.
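A hedged sketch of the shift the abstract describes: instead of persisting every raw timestep for offline analysis, an application reduces each step online to just the quantities needed for the target question. The specific reductions below (a few moments and a coarse histogram) are placeholder choices, not a method from the paper.

```python
# Online reduction: keep per-step summaries instead of raw fields.
import numpy as np

def reduce_step(field, bins=16):
    hist, _ = np.histogram(field, bins=bins, range=(field.min(), field.max()))
    return {"min": float(field.min()), "max": float(field.max()),
            "mean": float(field.mean()), "std": float(field.std()),
            "hist": hist}

summaries = []
for step in range(100):                       # stand-in for a simulation loop
    raw = np.random.rand(256, 256)            # ~0.5 MB of doubles per step
    summaries.append(reduce_step(raw))        # only a few hundred bytes kept per step

print(len(summaries), summaries[0]["mean"])
```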
- Published
- 2017
- Full Text
- View/download PDF
16. 'ACTION' AND 'CAUSE OF ACTION'
- Author
-
Philip E. Davis
- Subjects
Philosophy, Action (philosophy), Cause of action, Psychology
- Published
- 1962
- Full Text
- View/download PDF
17. Modern Logic in the Service of Law. Ilmar Tammelo
- Author
-
Philip E. Davis
- Subjects
Service (business), Philosophy, Law, Business
- Published
- 1981
- Full Text
- View/download PDF