77 results for "Thomas D. Uram"
Search Results
2. ExaWorks: Workflows for Exascale.
- Author
-
Aymen Al-Saadi, Dong H. Ahn, Yadu N. Babuji, Kyle Chard, James Corbett, Mihael Hategan, Stephen Herbein, Shantenu Jha, Daniel E. Laney, André Merzky, Todd S. Munson, Michael Salim, Mikhail Titov, Matteo Turilli, Thomas D. Uram, and Justin M. Wozniak
- Published
- 2021
- Full Text
- View/download PDF
3. Stream-AI-MD: streaming AI-driven adaptive molecular simulations for heterogeneous computing platforms.
- Author
-
Alexander Brace, Michael Salim, Vishal Subbiah, Heng Ma, Murali Emani, Anda Trifan, Austin R. Clyde, Corey Adams, Thomas D. Uram, Hyun Seung Yoo, Andrew Hock, Jessica Liu, Venkatram Vishwanath, and Arvind Ramanathan
- Published
- 2021
- Full Text
- View/download PDF
4. Extreme Scale Survey Simulation with Python Workflows.
- Author
-
Antonio Villarreal, Yadu N. Babuji, Thomas D. Uram, Daniel S. Katz, Kyle Chard, and Katrin Heitmann
- Published
- 2021
- Full Text
- View/download PDF
5. Enabling discovery data science through cross-facility workflows.
- Author
-
Katerina B. Antypas, Deborah Bard, Johannes P. Blaschke, Shane Richard Canon, Bjoern Enders, Mallikarjun Arjun Shankar, Suhas Somnath, Dale Stansberry, Thomas D. Uram, and Sean R. Wilkinson
- Published
- 2021
- Full Text
- View/download PDF
6. AxonEM Dataset: 3D Axon Instance Segmentation of Brain Cortical Regions.
- Author
-
Donglai Wei 0001, Kisuk Lee, Hanyu Li, Ran Lu, J. Alexander Bae, Zequan Liu, Lifu Zhang, Márcia dos Santos, Zudi Lin, Thomas D. Uram, Xueying Wang, Ignacio Arganda-Carreras, Brian Matejek, Narayanan Kasthuri, Jeff Lichtman, and Hanspeter Pfister
- Published
- 2021
- Full Text
- View/download PDF
7. Toward an Automated HPC Pipeline for Processing Large Scale Electron Microscopy Data.
- Author
-
Rafael Vescovi, Hanyu Li, Jeffery Kinnison, Murat Keçeli, Misha Salim, Narayanan Kasthuri, Thomas D. Uram, and Nicola J. Ferrier
- Published
- 2020
- Full Text
- View/download PDF
8. Balsam: Near Real-Time Experimental Data Analysis on Supercomputers.
- Author
-
Michael A. Salim, Thomas D. Uram, J. Taylor Childers, Venkatram Vishwanath, and Michael E. Papka
- Published
- 2019
- Full Text
- View/download PDF
9. Scaling Distributed Training of Flood-Filling Networks on HPC Infrastructure for Brain Mapping.
- Author
-
Wushi Dong, Nicola J. Ferrier, Narayanan Kasthuri, Peter Littlewood, Murat Keçeli, Rafael Vescovi, Hanyu Li, Corey Adams, Elise Jennings, Samuel Flender, Thomas D. Uram, and Venkatram Vishwanath
- Published
- 2019
- Full Text
- View/download PDF
10. DeepHyper: Asynchronous Hyperparameter Search for Deep Neural Networks.
- Author
-
Prasanna Balaprakash, Michael Salim, Thomas D. Uram, Venkat Vishwanath, and Stefan M. Wild
- Published
- 2018
- Full Text
- View/download PDF
11. Coupling LAMMPS and the vl3 Framework for Co-Visualization of Atomistic Simulations.
- Author
-
Silvio Rizzi 0001, Mark Hereld, Joseph A. Insley, Preeti Malakar, Michael E. Papka, Thomas D. Uram, and Venkatram Vishwanath
- Published
- 2016
- Full Text
- View/download PDF
12. Parallel distributed, GPU-accelerated, advanced lighting calculations for large-scale volume visualization.
- Author
-
Min Shih, Silvio Rizzi 0001, Joseph A. Insley, Thomas D. Uram, Venkatram Vishwanath, Mark Hereld, Michael E. Papka, and Kwan-Liu Ma
- Published
- 2016
- Full Text
- View/download PDF
13. Modeling Cooperative Threads to Project GPU Performance for Adaptive Parallelism.
- Author
-
Jiayuan Meng, Thomas D. Uram, Vitali A. Morozov, Venkatram Vishwanath, and Kalyan Kumaran
- Published
- 2015
- Full Text
- View/download PDF
14. Large-Scale Parallel Visualization of Particle-Based Simulations using Point Sprites and Level-Of-Detail.
- Author
-
Silvio Rizzi 0001, Mark Hereld, Joseph A. Insley, Michael E. Papka, Thomas D. Uram, and Venkatram Vishwanath
- Published
- 2015
- Full Text
- View/download PDF
15. Performance Modeling of vl3 Volume Rendering on GPU-Based Clusters.
- Author
-
Silvio Rizzi 0001, Mark Hereld, Joseph A. Insley, Michael E. Papka, Thomas D. Uram, and Venkatram Vishwanath
- Published
- 2014
- Full Text
- View/download PDF
16. PDACS: a portal for data analysis services for cosmological simulations.
- Author
-
Ryan Chard, Saba Sehrish, Alex A. Rodriguez, Ravi K. Madduri, Thomas D. Uram, Marc F. Paterno, Katrin Heitmann, Shreyas Cholia, Jim Kowalkowski, and Salman Habib
- Published
- 2014
- Full Text
- View/download PDF
17. The LSST DESC DC2 Simulated Sky Survey
- Author
-
Bela Abolfathi, David Alonso, Robert Armstrong, Éric Aubourg, Humna Awan, Yadu N. Babuji, Franz Erik Bauer, Rachel Bean, George Beckett, Rahul Biswas, Joanne R. Bogart, Dominique Boutigny, Kyle Chard, James Chiang, Chuck F. Claver, Johann Cohen-Tanugi, Céline Combet, Andrew J. Connolly, Scott F. Daniel, Seth W. Digel, Alex Drlica-Wagner, Richard Dubois, Emmanuel Gangler, Eric Gawiser, Thomas Glanzman, Phillipe Gris, Salman Habib, Andrew P. Hearin, Katrin Heitmann, Fabio Hernandez, Renée Hložek, Joseph Hollowed, Mustapha Ishak, Željko Ivezić, Mike Jarvis, Saurabh W. Jha, Steven M. Kahn, J. Bryce Kalmbach, Heather M. Kelly, Eve Kovacs, Danila Korytov, K. Simon Krughoff, Craig S. Lage, François Lanusse, Patricia Larsen, Laurent Le Guillou, Nan Li, Emily Phillips Longley, Robert H. Lupton, Rachel Mandelbaum, Yao-Yuan Mao, Phil Marshall, Joshua E. Meyers, Marc Moniez, Christopher B. Morrison, Andrei Nomerotski, Paul O’Connor, HyeYun Park, Ji Won Park, Julien Peloton, Daniel Perrefort, James Perry, Stéphane Plaszczynski, Adrian Pope, Andrew Rasmussen, Kevin Reil, Aaron J. Roodman, Eli S. Rykoff, F. Javier Sánchez, Samuel J. Schmidt, Daniel Scolnic, Christopher W. Stubbs, J. Anthony Tyson, Thomas D. Uram, Antonio Villarreal, Christopher W. Walter, Matthew P. Wiesner, W. Michael Wood-Vasey, and Joe Zuntz
- Published
- 2021
- Full Text
- View/download PDF
18. An Analysis of a Distributed GPU Implementation of Proton Computed Tomographic (pCT) Reconstruction.
- Author
-
Kirk L. Duffin, Nicholas T. Karonis, Caesar E. Ordoñez, Michael E. Papka, George Coutrakon, Béla Erdélyi, Eric C. Olson, and Thomas D. Uram
- Published
- 2012
- Full Text
- View/download PDF
19. GROPHECY: GPU performance projection from CPU code skeletons.
- Author
-
Jiayuan Meng, Vitali A. Morozov, Kalyan Kumaran, Venkatram Vishwanath, and Thomas D. Uram
- Published
- 2011
- Full Text
- View/download PDF
20. A solution looking for lots of problems: generic portals for science infrastructure.
- Author
-
Thomas D. Uram, Michael E. Papka, Mark Hereld, and Michael Wilde
- Published
- 2011
- Full Text
- View/download PDF
21. Accelerating science gateway development with Web 2.0 and Swift.
- Author
-
Wenjun Wu 0001, Thomas D. Uram, Michael Wilde, Mark Hereld, and Michael E. Papka
- Published
- 2010
- Full Text
- View/download PDF
22. The Problem Solving Environments of TeraGrid, Science Gateways, and the Intersection of the Two.
- Author
-
Jim Basney, Stuart Martin, John-Paul Navarro, Marlon E. Pierce, Tom Scavo, Leif Strand, Thomas D. Uram, Nancy Wilkins-Diehr, Wenjun Wu 0001, and Choonhan Youn
- Published
- 2008
- Full Text
- View/download PDF
23. Collaboration as a second thought.
- Author
-
Mark Hereld, Michael E. Papka, and Thomas D. Uram
- Published
- 2008
- Full Text
- View/download PDF
24. An Infrastructure of Network Services for Seamless Integration in Advanced Collaborative Computing Environments.
- Author
-
Han Gao, Ivan R. Judson, Thomas D. Uram, S. Lefvert, Terry Disz, Michael E. Papka, and Rick L. Stevens
- Published
- 2005
- Full Text
- View/download PDF
25. Capability matching of data streams with network services.
- Author
-
Han Gao, Ivan R. Judson, Thomas D. Uram, Terry Disz, Michael E. Papka, and Rick L. Stevens
- Published
- 2004
- Full Text
- View/download PDF
26. Streaming ultra high resolution images to large tiled display at nearly interactive frame rate with vl3.
- Author
-
Jie Jiang, Mark Hereld, Joseph A. Insley, Michael E. Papka, Silvio Rizzi 0001, Thomas D. Uram, and Venkatram Vishwanath
- Published
- 2015
- Full Text
- View/download PDF
27. Large-scale co-visualization for LAMMPS using vl3.
- Author
-
Silvio Rizzi 0001, Mark Hereld, Joseph A. Insley, Michael E. Papka, Thomas D. Uram, and Venkatram Vishwanath
- Published
- 2015
- Full Text
- View/download PDF
28. Enabling discovery data science through cross-facility workflows
- Author
-
K. B. Antypas, D. J. Bard, J. P. Blaschke, R. Shane Canon, Bjoern Enders, Mallikarjun Arjun Shankar, Suhas Somnath, Dale Stansberry, Thomas D. Uram, and Sean R. Wilkinson
- Published
- 2021
29. Web 2.0-based social informatics data grid.
- Author
-
Wenjun Wu 0001, Thomas D. Uram, and Michael E. Papka
- Published
- 2009
- Full Text
- View/download PDF
30. Large-scale dendritic spine extraction and analysis through petascale computing
- Author
-
Nicola J. Ferrier, Griffin Badalamente, Thomas D. Uram, Gregg A. Wildenberg, Hanyu Li, and Narayanan Kasthuri
- Subjects
Connectomics, Dendritic spine, Pipeline (computing), Pattern recognition, Terabyte, Supercomputer, Petascale computing, Segmentation, Soma, Artificial intelligence
- Abstract
The synapse is a central player in the nervous system, serving as the key structure that permits the relay of electrical and chemical signals from one neuron to another. The anatomy of the synapse contains important information about the signals and the strength of signal it transmits. Because of the small size of synapses, however, electron microscopy (EM) is the only method capable of directly visualizing their morphology and remains the gold standard for studying it. Historically, EM has been limited to small fields of view and often only in 2D, but recent advances in automated serial EM (“connectomics”) have enabled collecting large EM volumes that capture significant fractions of neurons and the different classes of synapses they receive (i.e., shaft, spine, soma, axon). However, even with recent advances in automatic segmentation methods, extracting neuronal and synaptic profiles from these connectomics datasets is difficult to scale over large EM volumes. Without methods that speed up automatic segmentation over large volumes, the full potential of utilizing these new EM methods to advance studies related to synapse morphologies will never be fully realized. To solve this problem, we describe our work to leverage Argonne leadership-scale supercomputers for segmentation of a 0.6-terabyte dataset using state-of-the-art machine-learning-based segmentation methods on a significant fraction of the 11.69 petaFLOPs supercomputer Theta at Argonne National Laboratory. We describe an iterative pipeline that couples human and machine feedback to produce accurate segmentation results in time frames that will make connectomics a more routine method for exploring how synapse biology changes across a number of biological conditions. Finally, we demonstrate how dendritic spines can be algorithmically extracted from the segmentation dataset for analysis of spine morphologies. Advancing this effort at large compute scale is expected to yield benefits in turnaround time for segmentation of individual datasets, accelerating the path to biology results and providing population-level insight into how thousands of synapses originate from different neurons; we also expect to reap benefits in terms of greater accuracy from the more compute-intensive algorithms these systems enable.
- Published
- 2021
31. Stream-AI-MD
- Author
-
Heng Ma, Hyunseung Yoo, Vishal Subbiah, Thomas D. Uram, Alexander Brace, Austin Clyde, Corey Adams, Jessica Liu, Venkatram Vishwanath, Andrew Hock, Michael A. Salim, Murali Emani, Anda Trifan, and Arvind Ramanathan
- Subjects
Computer science, Deep learning, Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), Molecular biophysics, Symmetric multiprocessor system, Parallel computing, Supercomputer, Workflow, Artificial intelligence
- Abstract
Emerging hardware tailored for artificial intelligence (AI) and machine learning (ML) methods provide novel means to couple them with traditional high performance computing (HPC) workflows involving molecular dynamics (MD) simulations. We propose Stream-AI-MD, a novel instance of applying deep learning methods to drive adaptive MD simulation campaigns in a streaming manner. We leverage the ability to run ensemble MD simulations on GPU clusters, while the data from atomistic MD simulations are streamed continuously to AI/ML approaches to guide the conformational search in a biophysically meaningful manner on a wafer-scale AI accelerator. We demonstrate the efficacy of Stream-AI-MD simulations for two scientific use-cases: (1) folding a small prototypical protein, namely ββα-fold (BBA) FSD-EY and (2) understanding protein-protein interaction (PPI) within the SARS-CoV-2 proteome between two proteins, nsp16 and nsp10. We show that Stream-AI-MD simulations can improve time-to-solution by ~50X for BBA protein folding. Further, we also discuss performance trade-offs involved in implementing AI-coupled HPC workflows on heterogeneous computing architectures.
- Published
- 2021
32. The LSST DESC DC2 Simulated Sky Survey
- Author
-
Kevin Reil, Adrian Pope, Kyle Chard, Mustapha Ishak, D. Boutigny, Humna Awan, H. Kelly, Laurent Le Guillou, W. Michael Wood-Vasey, Eli S. Rykoff, Stéphane Plaszczynski, Rahul Biswas, Richard Dubois, Saurabh Jha, Danila Korytov, C. W. Walter, J. Anthony Tyson, Katrin Heitmann, T. Glanzman, Fabio Hernandez, François Lanusse, F. Javier Sánchez, Joe Zuntz, Željko Ivezić, Marc Moniez, Yadu Babuji, HyeYun Park, Christopher W. Stubbs, Franz E. Bauer, Phillipe Gris, Chuck Claver, Paul O'Connor, J. Meyers, Christopher B. Morrison, George Beckett, Joseph Hollowed, Seth Digel, Andrew Rasmussen, Céline Combet, Phil Marshall, Éric Aubourg, Rachel Mandelbaum, J. Perry, Mike Jarvis, Thomas D. Uram, K. Simon Krughoff, Johann Cohen-Tanugi, Scott F. Daniel, Yao-Yuan Mao, Matthew P. Wiesner, James Chiang, Bela Abolfathi, Daniel Scolnic, Craig S. Lage, Ji Won Park, Steven M. Kahn, Eric Gawiser, Antonio Villarreal, A. Roodman, E. Gangler, Nan Li, Rachel Bean, David Alonso, Emily Phillips Longley, Andrei Nomerotski, Andrew P. Hearin, Salman Habib, Daniel Perrefort, Andrew J. Connolly, J. Peloton, J. Bryce Kalmbach, Eve Kovacs, Patricia Larsen, Alex Drlica-Wagner, Renée Hložek, Robert Armstrong, J.R. Bogart, Samuel Schmidt, Robert H. Lupton, and the LSST Dark Energy Science Collaboration
- Subjects
Cosmology and Nongalactic Astrophysics (astro-ph.CO), Instrumentation and Methods for Astrophysics (astro-ph.IM), Data products, Image processing software, Sky surveys, Observatory, N-body simulations, Deep drilling, Remote sensing, Physics, Testbed, Astronomy and Astrophysics, Cosmology, Space and Planetary Science, Simulated data
- Abstract
We describe the simulated sky survey underlying the second data challenge (DC2) carried out in preparation for analysis of the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) by the LSST Dark Energy Science Collaboration (LSST DESC). Significant connections across multiple science domains will be a hallmark of LSST; the DC2 program represents a unique modeling effort that stresses this interconnectivity in a way that has not been attempted before. This effort encompasses a full end-to-end approach: starting from a large N-body simulation, through setting up LSST-like observations including realistic cadences, through image simulations, and finally processing with Rubin's LSST Science Pipelines. This last step ensures that we generate data products resembling those to be delivered by the Rubin Observatory as closely as is currently possible. The simulated DC2 sky survey covers six optical bands in a wide-fast-deep (WFD) area of approximately 300 deg^2 as well as a deep drilling field (DDF) of approximately 1 deg^2. We simulate 5 years of the planned 10-year survey. The DC2 sky survey has multiple purposes. First, the LSST DESC working groups can use the dataset to develop a range of DESC analysis pipelines to prepare for the advent of actual data. Second, it serves as a realistic testbed for the image processing software under development for LSST by the Rubin Observatory. In particular, simulated data provide a controlled way to investigate certain image-level systematic effects. Finally, the DC2 sky survey enables the exploration of new scientific ideas in both static and time-domain cosmology., 39 pages, 19 figures, version accepted for publication in ApJS
- Published
- 2021
33. Extreme Scale Survey Simulation with Python Workflows
- Author
-
Thomas D. Uram, Katrin Heitmann, Kyle Chard, Daniel S. Katz, Yadu Babuji, and Antonio Villarreal
- Subjects
Computer science, Distributed computing, Python (programming language), Pipeline (software), Data set, Software portability, Software, Workflow, Scalability, Distributed, Parallel, and Cluster Computing (cs.DC), Instrumentation and Methods for Astrophysics (astro-ph.IM)
- Abstract
The Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) will soon carry out an unprecedented wide, fast, and deep survey of the sky in multiple optical bands. The data from LSST will open up a new discovery space in astronomy and cosmology, simultaneously providing clues toward addressing burning issues of the day, such as the origin of dark energy and the nature of dark matter, while yielding data that will, in turn, pose fresh new questions. To prepare for the imminent arrival of this remarkable data set, it is crucial that the associated scientific communities be able to develop the software needed to analyze it. Computational power now available allows us to generate synthetic data sets that can be used as a realistic training ground for such an effort. This effort raises its own challenges -- the need to generate very large simulations of the night sky, scaling up simulation campaigns to large numbers of compute nodes across multiple computing centers with different architectures, and optimizing the complex workload around memory requirements and widely varying wall clock times. We describe here a large-scale workflow that melds together Python code to steer the workflow, Parsl to manage the large-scale distributed execution of workflow components, and containers to carry out the image simulation campaign across multiple sites. Taking advantage of these tools, we developed an extreme-scale computational framework and used it to simulate five years of observations for 300 square degrees of sky area. We describe our experiences and lessons learned in developing this workflow capability, and highlight how the scalability and portability of our approach enabled us to efficiently execute it on up to 4000 compute nodes on two supercomputers. (Comment: Proceedings of eScience 2021; 9 pages, 5 figures.) A minimal Parsl sketch of this steering pattern follows this entry.
- Published
- 2021
- Full Text
- View/download PDF
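The steering pattern described in the abstract above, Python code driving Parsl-managed distributed execution, can be illustrated with a minimal sketch. This is not the DESC pipeline itself: the app names, the echo command standing in for the containerized image simulation, and the local executor configuration are assumptions for illustration; only the Parsl API calls are real.

```python
# Minimal Parsl sketch of a Python-steered workflow (illustrative only).
import parsl
from parsl import bash_app, python_app
from parsl.config import Config
from parsl.executors import HighThroughputExecutor

# Local pilot-style executor; the paper used site-specific HPC configurations.
parsl.load(Config(executors=[HighThroughputExecutor(label="htex_local")]))

@bash_app
def simulate_patch(patch_id):
    # Placeholder shell command standing in for the containerized image simulation.
    return f"echo simulating sky patch {patch_id}"

@python_app
def summarize(results):
    return f"{len(results)} patches completed"

futures = [simulate_patch(i) for i in range(10)]  # tasks are submitted asynchronously
done = [f.result() for f in futures]              # block until every task finishes
print(summarize(done).result())
```

Swapping the executor configuration is the only change needed to move the same driver script from a laptop to HPC nodes, which is the portability property the abstract highlights.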
34. AxonEM Dataset: 3D Axon Instance Segmentation of Brain Cortical Regions
- Author
-
Hanspeter Pfister, Ignacio Arganda-Carreras, Hanyu Li, Zudi Lin, Zequan Liu, Thomas D. Uram, Lifu Zhang, Brian Matejek, J. Alexander Bae, Xueying Wang, Márcia dos Santos, Donglai Wei, Jeff W. Lichtman, Ran Lu, Kisuk Lee, and Narayanan Kasthuri
- Subjects
Ground truth, Mouse cortex, Computer science, Ground truth segmentation, Pattern recognition, Reconstruction method, Cortex (anatomy), Biological neural network, Segmentation, Artificial intelligence, Axon
- Abstract
Electron microscopy (EM) enables the reconstruction of neural circuits at the level of individual synapses, which has been transformative for scientific discoveries. However, due to their complex morphology, an accurate reconstruction of cortical axons has become a major challenge. Worse still, there is no publicly available large-scale EM dataset from the cortex that provides dense ground truth segmentation for axons, making it difficult to develop and evaluate large-scale axon reconstruction methods. To address this, we introduce the AxonEM dataset, which consists of two 30×30×30 μm³ EM image volumes from the human and mouse cortex, respectively. We thoroughly proofread over 18,000 axon instances to provide dense 3D axon instance segmentation, enabling large-scale evaluation of axon reconstruction methods. In addition, we densely annotate nine ground truth subvolumes for training in each data volume. With this, we reproduce two published state-of-the-art methods and provide their evaluation results as a baseline. We publicly release our code and data at https://connectomics-bazaar.github.io/proj/AxonEM/index.html to foster the development of advanced methods.
- Published
- 2021
35. Augmenting views on large format displays with tablets.
- Author
-
Phil Lindner, Adolfo Rodriguez, Thomas D. Uram, and Michael E. Papka
- Published
- 2014
- Full Text
- View/download PDF
36. A Web 2.0-Based Scientific Application Framework.
- Author
-
Wenjun Wu 0001, Thomas D. Uram, Michael Wilde, Mark Hereld, and Michael E. Papka
- Published
- 2010
- Full Text
- View/download PDF
37. Toward an Automated HPC Pipeline for Processing Large Scale Electron Microscopy Data
- Author
-
Jeffery Kinnison, Narayanan Kasthuri, Rafael Vescovi, Murat Keçeli, Misha Salim, Thomas D. Uram, Nicola J. Ferrier, and Hanyu Li
- Subjects
Workstation, Computer science, Pipeline (computing), Computational science, Data visualization, Software, Image and Video Processing (eess.IV), Modular design, Supercomputer, Visualization, Distributed, Parallel, and Cluster Computing (cs.DC)
- Abstract
We present a fully modular and scalable software pipeline for processing electron microscope (EM) images of brain slices into 3D visualizations of individual neurons and demonstrate an end-to-end segmentation of a large EM volume using a supercomputer. Our pipeline scales multiple packages used by the EM community with minimal changes to the original source codes. We tested each step of the pipeline individually, on a workstation, a cluster, and a supercomputer. Furthermore, we can compose workflows from these operations using a Balsam database that can be triggered during data acquisition or with the use of different front ends, and control the granularity of the pipeline execution. We describe the implementation of our pipeline and the modifications required to integrate and scale up existing codes. The modular nature of our environment enables diverse research groups to contribute to the pipeline without disrupting the workflow, i.e., new codes can be easily integrated at each step of the pipeline.
- Published
- 2020
38. The Mira-Titan Universe. III. Emulation of the Halo Mass Function
- Author
-
Thomas D. Uram, Nicholas Frontiere, Sebastian Bocquet, Salman Habib, Hal Finkel, Katrin Heitmann, Adrian Pope, and Earl Lawrence
- Subjects
Physics, Particle physics, Cosmology and Nongalactic Astrophysics (astro-ph.CO), Halo mass function, Astronomy and Astrophysics, Redshift, Universe, Space and Planetary Science, Dark energy, Hypercube, Halo, Flatness (cosmology)
- Abstract
We construct an emulator for the halo mass function over group and cluster mass scales for a range of cosmologies, including the effects of dynamical dark energy and massive neutrinos. The emulator is based on the recently completed Mira-Titan Universe suite of cosmological $N$-body simulations. The main set of simulations spans 111 cosmological models with 2.1 Gpc boxes. We extract halo catalogs in the redshift range $z=[0.0, 2.0]$ and for masses $M_{200\mathrm{c}}\geq 10^{13}M_\odot/h$. The emulator covers an 8-dimensional hypercube spanned by {$\Omega_\mathrm{m}h^2$, $\Omega_\mathrm{b}h^2$, $\Omega_\nu h^2$, $\sigma_8$, $h$, $n_s$, $w_0$, $w_a$}; spatial flatness is assumed. We obtain smooth halo mass functions by fitting piecewise second-order polynomials to the halo catalogs and employ Gaussian process regression to construct the emulator while keeping track of the statistical noise in the input halo catalogs and uncertainties in the regression process. For redshifts $z\lesssim1$, the typical emulator precision is better than $2\%$ for $10^{13}-10^{14} M_\odot/h$ and $, Comment: 22 pages, 10 figures, 2 tables. Accepted for publication in ApJ (v2). For associated emulator code, see https://github.com/SebastianBocquet/MiraTitanHMFemulator
- Published
- 2020
39. The Last Journey. I. An Extreme-Scale Simulation on the Mira Supercomputer
- Author
-
Salman Habib, Thomas D. Uram, Nicholas Frontiere, Janet Y. K. Knowles, Patricia Larsen, Joseph A. Insley, Esteban Rangel, Katrin Heitmann, Hal Finkel, Imran Sultan, Eve Kovacs, Adrian Pope, Silvio Rizzi, and Danila Korytov
- Subjects
Physics, Cosmology and Nongalactic Astrophysics (astro-ph.CO), Pipeline (computing), Astronomy and Astrophysics, Tracking (particle physics), Supercomputer, Cosmology, Computational science, Space and Planetary Science, Sky, Planck
- Abstract
The Last Journey is a large-volume, gravity-only, cosmological N-body simulation evolving more than 1.24 trillion particles in a periodic box with a side-length of 5.025Gpc. It was implemented using the HACC simulation and analysis framework on the BG/Q system, Mira. The cosmological parameters are chosen to be consistent with the results from the Planck satellite. A range of analysis tools have been run in situ to enable a diverse set of science projects, and at the same time, to keep the resulting data amount manageable. Analysis outputs have been generated starting at redshift z~10 to allow for construction of synthetic galaxy catalogs using a semi-analytic modeling approach in post-processing. As part of our in situ analysis pipeline we employ a new method for tracking halo sub-structures, introducing the concept of subhalo cores. The production of multi-wavelength synthetic sky maps is facilitated by generating particle lightcones in situ, also beginning at z~10. We provide an overview of the simulation set-up and the generated data products; a first set of analysis results is presented. A subset of the data is publicly available., Comment: 14 pages, 9 figures. Accepted for publication in ApJS. New visualization and new results for the matter correlation function added, minor edits. The Last Journey data products can be accessed here: https://cosmology.alcf.anl.gov/
- Published
- 2020
- Full Text
- View/download PDF
40. Scalable pCT Image Reconstruction Delivered as a Cloud Service
- Author
-
Michael E. Papka, Ian Foster, Caesar E. Ordoñez, Kyle Chard, John R. Winans, Thomas D. Uram, Nicholas T. Karonis, Ryan Chard, Kirk L. Duffin, Ravi Madduri, and Justin Fleischauer
- Subjects
Service (systems architecture), Computer Networks and Communications, Computer science, Real-time computing, Cloud computing, Provisioning, Iterative reconstruction, Supercomputer, Computer Science Applications, Hardware and Architecture, Scalability, Software, Simulation, Information Systems
- Abstract
We describe a cloud-based medical image reconstruction service designed to meet a real-time and daily demand to reconstruct thousands of images from proton cancer treatment facilities worldwide. Rapid reconstruction of a three-dimensional Proton Computed Tomography (pCT) image can require the transfer of 100 GB of data and use of approximately 120 GPU-enabled compute nodes. The nature of proton therapy means that demand for such a service is sporadic and comes from potentially hundreds of clients worldwide. We thus explore the use of a commercial cloud as a scalable and cost-efficient platform for pCT reconstruction. To address the high performance requirements of this application we leverage Amazon Web Services’ GPU-enabled cluster resources that are provisioned with high performance networks between nodes. To support episodic demand, we develop an on-demand multi-user provisioning service that can dynamically provision and resize clusters based on image reconstruction requirements, priorities, and wait times. We compare the performance of our pCT reconstruction service running on commercial cloud resources with that of the same application on dedicated local high performance computing resources. We show that we can achieve scalable and on-demand reconstruction of large scale pCT images for simultaneous multi-client requests, processing images in less than 10 minutes for less than $10 per image.
- Published
- 2018
41. Adapting the serial Alpgen parton-interaction generator to simulate LHC collisions on millions of parallel threads
- Author
-
Michael E. Papka, Doug Benjamin, Thomas LeCompte, John Taylor Childers, and Thomas D. Uram
- Subjects
Luminosity, Large Hadron Collider, Computer science, Monte Carlo method, General Physics and Astronomy, Parton, Parallel computing, Supercomputer, Hardware and Architecture, Worldwide LHC Computing Grid
- Abstract
As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, Monte Carlo event generation was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved.
- Published
- 2017
42. Scaling Distributed Training of Flood-Filling Networks on HPC Infrastructure for Brain Mapping
- Author
-
Narayanan Kasthuri, Hanyu Li, Samuel Flender, Wushi Dong, Rafael Vescovi, Thomas D. Uram, Venkatram Vishwanath, Peter B. Littlewood, Nicola J. Ferrier, Elise Jennings, Corey Adams, and Murat Keçeli
- Subjects
Computer science, Machine Learning (cs.LG), Image and Video Processing (eess.IV), Neurons and Cognition (q-bio.NC), Distributed, Parallel, and Cluster Computing (cs.DC), Inference, Supercomputer, Computer engineering, Asynchronous communication, Scaling
- Abstract
Mapping all the neurons in the brain requires automatic reconstruction of entire cells from volume electron microscopy data. The flood-filling network (FFN) architecture has demonstrated leading performance for segmenting structures from this data. However, the training of the network is computationally expensive. In order to reduce the training time, we implemented synchronous and data-parallel distributed training using the Horovod library, which is different from the asynchronous training scheme used in the published FFN code. We demonstrated that our distributed training scaled well up to 2048 Intel Knights Landing (KNL) nodes on the Theta supercomputer. Our trained models achieved a similar level of inference performance, but took less training time compared to previous methods. Our study on the effects of different batch sizes on FFN training suggests ways to further improve training efficiency. Our findings on optimal learning rate and batch sizes agree with previous works. (Comment: 9 pages, 10 figures.) A brief Horovod sketch of this synchronous data-parallel pattern follows this entry.
- Published
- 2019
- Full Text
- View/download PDF
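The synchronous, data-parallel training described above can be sketched with Horovod. The published FFN code is TensorFlow-based; the sketch below uses PyTorch purely for brevity, and the tiny model and random tensors are stand-ins for the FFN model and EM data pipeline, not the actual code.

```python
# Hedged sketch of synchronous data-parallel training with Horovod (illustrative).
import torch
import torch.nn as nn
import horovod.torch as hvd

hvd.init()                                   # one process per worker
torch.manual_seed(42 + hvd.rank())

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())  # scale lr with workers

# Average gradients across workers each step and start all workers from identical state.
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

loss_fn = nn.MSELoss()
for step in range(10):
    x, y = torch.randn(8, 64), torch.randn(8, 1)   # stand-in for one shard of EM data
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    if hvd.rank() == 0 and step % 5 == 0:
        print(f"step {step} loss {loss.item():.4f}")
```

Launched with, for example, `horovodrun -np 4 python train.py`, each process trains on its own data shard while Horovod's allreduce keeps the model replicas synchronized, which is the scheme contrasted with asynchronous training in the abstract.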
43. HACC Cosmological Simulations: First Data Release
- Author
-
Katrin Heitmann, Adrian Pope, Hal Finkel, Benjamin S. Allen, Kyle Chard, Esteban Rangel, Joseph Hollowed, Patricia Larsen, Danila Korytov, Thomas D. Uram, Salman Habib, Ian Foster, and Nicholas Frontiere
- Subjects
Physics, Service (systems architecture), Authentication, Cosmology and Nongalactic Astrophysics (astro-ph.CO), Instrumentation and Methods for Astrophysics (astro-ph.IM), Astronomy and Astrophysics, Data type, Computational science, Space and Planetary Science, Data hub, Data transmission
- Abstract
We describe the first major public data release from cosmological simulations carried out with Argonne's HACC code. This initial release covers a range of datasets from large gravity-only simulations. The data products include halo information for multiple redshifts, down-sampled particles, and lightcone outputs. We provide data from two very large LCDM simulations as well as beyond-LCDM simulations spanning eleven w0-wa cosmologies. Our release platform uses Petrel, a research data service located at the Argonne Leadership Computing Facility. Petrel offers fast data transfer mechanisms and authentication via Globus, enabling simple and efficient access to stored datasets. Easy browsing of the available data products is provided via a web portal that allows the user to navigate simulation products efficiently. The data hub will be extended by adding more types of data products and by enabling computational capabilities to allow direct interactions with simulation results. (Comment: 8 pages, 5 figures. Final version published in ApJS. The HACC Simulation Data Portal can be accessed at https://cosmology.alcf.anl.gov/.) A sketch of the kind of Globus transfer request the portal relies on follows this entry.
- Published
- 2019
- Full Text
- View/download PDF
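The abstract above names Globus as the data transfer and authentication mechanism behind the release. The sketch below shows what a scripted transfer request looks like with the globus-sdk Python package; the client ID, endpoint UUIDs, and paths are placeholders rather than the actual portal values.

```python
# Sketch of requesting a Globus transfer of released data products (illustrative;
# all IDs and paths below are placeholders).
import globus_sdk

CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"      # placeholder Globus app registration
SOURCE_ENDPOINT = "source-endpoint-uuid"     # placeholder for the data portal's endpoint
DEST_ENDPOINT = "destination-endpoint-uuid"  # placeholder for your own endpoint

# Native-app login flow; the default requested scopes include the Transfer API.
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow()
print("Log in at:", auth_client.oauth2_get_authorize_url())
tokens = auth_client.oauth2_exchange_code_for_tokens(input("Paste auth code: "))
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

tc = globus_sdk.TransferClient(authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token))

# Queue a recursive copy of one (hypothetical) halo-catalog directory.
tdata = globus_sdk.TransferData(tc, SOURCE_ENDPOINT, DEST_ENDPOINT, label="halo catalogs")
tdata.add_item("/path/to/halo_catalogs/", "/local/halo_catalogs/", recursive=True)
task = tc.submit_transfer(tdata)
print("Submitted transfer task:", task["task_id"])
```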
44. DeepHyper: Asynchronous Hyperparameter Search for Deep Neural Networks
- Author
-
Thomas D. Uram, Venkat Vishwanath, Stefan M. Wild, Prasanna Balaprakash, and Michael A. Salim
- Subjects
Hyperparameter, Computer science, Deep learning, Bayesian optimization, Machine learning, Random search, Asynchronous communication, Genetic algorithm, Artificial intelligence
- Abstract
Hyperparameters employed by deep learning (DL) methods play a substantial role in the performance and reliability of these methods in practice. Unfortunately, finding performance-optimizing hyperparameter settings is a notoriously difficult task. Hyperparameter search methods typically have limited production-strength implementations or do not target scalability within a highly parallel machine, portability across different machines, experimental comparison between different methods, and tighter integration with workflow systems. In this paper, we present DeepHyper, a Python package that provides a common interface for the implementation and study of scalable hyperparameter search methods. It adopts the Balsam workflow system to hide the complexities of running large numbers of hyperparameter configurations in parallel on high-performance computing (HPC) systems. We implement and study asynchronous model-based search methods that consist of sampling a small number of input hyperparameter configurations and progressively fitting surrogate models over the input-output space until exhausting a user-defined budget of evaluations. We evaluate the efficacy of these methods relative to approaches such as random search, genetic algorithms, Bayesian optimization, and Hyperband on DL benchmarks on CPU- and GPU-based HPC systems. An illustrative sketch of the model-based search idea follows this entry.
- Published
- 2018
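The model-based search idea described above, evaluating a few configurations, fitting a surrogate over the input-output space, and proposing new points until the budget is spent, is sketched below. This is not DeepHyper's API: the toy objective, the random-forest surrogate, and the sequential loop are assumptions for illustration, and the asynchronous parallel evaluation layer (handled by Balsam in the paper) is omitted.

```python
# Illustrative sketch of surrogate-model-based hyperparameter search (not DeepHyper's API).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def objective(lr, batch_size):
    # Stand-in for training a network and returning a validation score.
    return -((np.log10(lr) + 2.5) ** 2) - 0.001 * (batch_size - 64) ** 2

def sample(n):
    # Draw n random (learning rate, batch size) configurations.
    return np.column_stack([10 ** rng.uniform(-5, -1, n), rng.integers(8, 257, n)])

X = sample(8)                                      # small initial set of configurations
y = np.array([objective(lr, bs) for lr, bs in X])

budget = 40
while len(y) < budget:
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    candidates = sample(256)
    best = candidates[np.argmax(surrogate.predict(candidates))]  # exploit the surrogate
    X = np.vstack([X, best])
    y = np.append(y, objective(*best))

print("best config:", X[np.argmax(y)], "score:", round(y.max(), 4))
```

A production system would evaluate proposed configurations asynchronously in parallel and refit the surrogate as results arrive, which is the role the Balsam workflow system plays in the paper.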
45. Expanding the Scope of High-Performance Computing Facilities
- Author
-
Michael E. Papka and Thomas D. Uram
- Subjects
General Computer Science, General Engineering, Computer science, Distributed computing, Cloud computing, Service provider, Supercomputer, Data science, HPC Challenge Benchmark, Grid computing, Utility computing, End-user computing
- Abstract
The high-performance computing centers of the future will expand their roles as service providers, and as the machines scale up, so should the sizes of the communities they serve. National facilities must cultivate their users as much as they focus on operating machines reliably. The authors present five interrelated topic areas that are essential to expanding the value provided to those performing computational science.
- Published
- 2016
46. PDACS: A Portal for Data Analysis Services for Cosmological Simulations
- Author
-
Katrin Heitmann, Tanu Malik, Shreyas Cholia, Salman Habib, Marc Paterno, Thomas D. Uram, Jim Kowalkowski, Alex Rodriguez, Saba Sehrish, Ravi Madduri, and Ryan Chard
- Subjects
General Computer Science, General Engineering, Database, Computer science, Supercomputer, Data modeling, Workflow, Research community, Analysis tools, Software engineering
- Abstract
PDACS (Portal for Data Analysis Services for Cosmological Simulations) is a Web-based analysis portal that provides access to large simulations and large-scale parallel analysis tools to the research community. It provides opportunities to access, transfer, manipulate, search, and record simulation data, as well as to contribute applications and carry out (possibly complex) computational analyses of the data. PDACS also enables wrapping of analysis tools written in a large number of languages within its workflow system, providing a powerful way to carry out multilevel/multistep analyses. The system allows for cross-layer provenance tracking, implementing a transparent method for sharing workflow specifications, as well as a convenient mechanism for checking reproducibility of results generated by the workflows. Users are able to submit their own tools to the system and to share tools with the rest of the community.
- Published
- 2015
47. DESCQA: An Automated Validation Framework for Synthetic Sky Catalogs
- Author
-
K. Simon Krughoff, Rachel Mandelbaum, Yao-Yuan Mao, Salman Habib, Risa H. Wechsler, François Lanusse, Duncan Campbell, Thomas D. Uram, Andrés N. Ruiz, Andrew J. Benson, E. Paillas, Zarija Lukić, Tiziana Di Matteo, Rongpu Zhou, Adrian Pope, Ananth Tenneti, Cristian A Vega-Martínez, Sofía A. Cora, Nelson Padilla, Paul M. Ricker, Katrin Heitmann, Jeffrey A. Newman, Ying Zu, Andrew P. Hearin, Eve Kovacs, J. Bryce Kalmbach, and Joseph DeRose
- Subjects
Astronomical Sciences, Physical Sciences, Cosmology and Nongalactic Astrophysics (astro-ph.CO), Instrumentation and Methods for Astrophysics (astro-ph.IM), Large-scale structure of universe, Large Synoptic Survey Telescope, Astronomy and Astrophysics, Astronomy, Fundamental physics, Numerical analysis, Pipeline (software), Information retrieval, Space and Planetary Science, Sky
- Abstract
The use of high-quality simulated sky catalogs is essential for the success of cosmological surveys. The catalogs have diverse applications, such as investigating signatures of fundamental physics in cosmological observables, understanding the effect of systematic uncertainties on measured signals and testing mitigation strategies for reducing these uncertainties, aiding analysis pipeline development and testing, and survey strategy optimization. The list of applications is growing with improvements in the quality of the catalogs and the details that they can provide. Given the importance of simulated catalogs, it is critical to provide rigorous validation protocols that enable both catalog providers and users to assess the quality of the catalogs in a straightforward and comprehensive way. For this purpose, we have developed the DESCQA framework for the Large Synoptic Survey Telescope Dark Energy Science Collaboration as well as for the broader community. The goal of DESCQA is to enable the inspection, validation, and comparison of an inhomogeneous set of synthetic catalogs via the provision of a common interface within an automated framework. In this paper, we present the design concept and first implementation of DESCQA. In order to establish and demonstrate its full functionality we use a set of interim catalogs and validation tests. We highlight several important aspects, both technical and scientific, that require thoughtful consideration when designing a validation framework, including validation metrics and how these metrics impose requirements on the synthetic sky catalogs. (The complete list of authors can be found in the document. Instituto de Astrofísica de La Plata, Facultad de Ciencias Astronómicas y Geofísicas.)
- Published
- 2017
- Full Text
- View/download PDF
48. Bayesian Causalities, Mappings, and Phylogenies: A Social Science Gateway for Modeling Ethnographic, Archaeological, Historical Ecological, and Biological Variables
- Author
-
Lukasz Lacinski, Stuart Martin, Wesley Roberts, Feng Ren, Paul Rodriguez, Douglas R. White, Thomas D. Uram, Tolga Oztan, and Eric Blau
- Subjects
Variables, Ecology, Autocorrelation, Bayesian probability, Bayesian network, Conditional probability table, Missing data, Archaeology, Geography, Ordinary least squares, Social science
- Abstract
Extending the innovative “Def Wy” procedures for modeling evolutionary network effects (Dow, Cross-Cult Res 41:336–363, 2007; Dow and Eff, Cross-Cult Res 43:134–151, 2009; Dow and Eff, Cross-Cult Res 43:206–229, 2009), a Complex Social Science (CoSSci) Gateway, http://intersci.ss.uci.edu, was developed to provide complex analyses of ethnographic, archaeological, historical, ecological, and biological datasets with easy open access. Analysis begins with a dependent variable y with n observations and X independent and other variables, and imputes missing data for all variates. Several (n × n) W* matrices measure evolutionary network effects such as diffusion or phylogenetic ancestries. W* is row-normalized to sum to 1 and combined to obtain a W, multiplied by X as WX, and allowing X and y multiplication by W: $\dot{W}y = \dot{\alpha}_0 + \dot{\alpha}_i\,(WX_{i=1,n})$. Wy measures the evolutionary autocorrelation portion of y, discounting evolutionary effects of propinquity and phylogenetics. Tested for exogeneity (error terms uncorrelated with Wy or independent variables), the two-stage Ordinary Least Squares (OLS) results include measures of independent-variable and deep evolutionary autocorrelation predictors. We show how these methods apply to a wide variety of problems in the social sciences to which ecological and biological variables will apply once contributed. A toy two-stage estimation sketch follows this entry.
- Published
- 2016
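The two-stage setup described above, a row-normalized W, an autocorrelation term Wy instrumented by WX, then ordinary least squares, can be sketched on synthetic data. This is a toy illustration rather than the CoSSci Gateway code; the proximity matrix and data-generating values are assumptions.

```python
# Toy sketch of a two-stage network-lag ("Wy") regression on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 3

# Random symmetric proximity matrix (standing in for distance/language/phylogeny), row-normalized.
A = rng.random((n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)
W = A / A.sum(axis=1, keepdims=True)

X = rng.normal(size=(n, k))
beta_true, rho_true = np.array([1.0, -0.5, 0.25]), 0.4
# Generate y with a network-lag term: y = rho*W*y + X*beta + e  =>  y = (I - rho*W)^-1 (X*beta + e)
e = rng.normal(scale=0.5, size=n)
y = np.linalg.solve(np.eye(n) - rho_true * W, X @ beta_true + e)

def ols(Z, t):
    coef, *_ = np.linalg.lstsq(Z, t, rcond=None)
    return coef

# Stage 1: predict the lag term W*y from the exogenous instruments [1, X, WX].
Z1 = np.column_stack([np.ones(n), X, W @ X])
Wy_hat = Z1 @ ols(Z1, W @ y)

# Stage 2: regress y on the fitted lag term and X.
Z2 = np.column_stack([np.ones(n), Wy_hat, X])
coef = ols(Z2, y)
print("estimated rho:", round(coef[1], 3), " beta:", np.round(coef[2:], 3))
```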
49. The Outer Rim Simulation: A Path to Many-core Supercomputers
- Author
-
Katrin Heitmann, Nicholas Frontiere, Joseph A. Insley, Esteban Rangel, Samuel Flender, Thomas D. Uram, Vitali Morozov, Hal Finkel, Hillary L. Child, Silvio Rizzi, Danila Korytov, Salman Habib, and Adrian Pope
- Subjects
Physics, Cosmology and Nongalactic Astrophysics (astro-ph.CO), Data curation, Astronomy and Astrophysics, Supercomputer, Computational science, Space and Planetary Science, Halo
- Abstract
We describe the Outer Rim cosmological simulation, one of the largest high-resolution N-body simulations performed to date, aimed at promoting science to be carried out with large-scale structure surveys. The simulation covers a volume of (4.225 Gpc)^3 and evolves more than one trillion particles. It was executed on Mira, a BlueGene/Q system at the Argonne Leadership Computing Facility. We discuss some of the computational challenges posed by a system like Mira, a many-core supercomputer, and how the simulation code, HACC, has been designed to overcome these challenges. We have carried out a large range of analyses on the simulation data and we report on the results as well as the data products that have been generated. The full data set generated by the simulation totals more than 5 PB of data, making data curation and data handling a large challenge in itself. The simulation results have been used to generate synthetic catalogs for large-scale structure surveys, including DESI and eBOSS, as well as CMB experiments. A detailed catalog for the LSST DESC data challenges has been created as well. We publicly release some of the Outer Rim halo catalogs, downsampled particle information, and lightcone data. (Comment: 10 pages, 10 figures. Submitted to ApJS. The Outer Rim data products can be accessed at https://cosmology.alcf.anl.gov/.)
- Published
- 2019
50. Distributed and hardware accelerated computing for clinical medical imaging using proton computed tomography (pCT)
- Author
-
Michael E. Papka, George Coutrakon, Nicholas T. Karonis, Caesar E. Ordoñez, Eric C. Olson, Thomas D. Uram, Kirk L. Duffin, and Bela Erdelyi
- Subjects
Proton computed tomography, Computed tomography, Iterative reconstruction, Medical imaging, Proton therapy, CUDA, Computer cluster, Computational science, Computer science, Computer graphics (images), Computer Networks and Communications, Theoretical Computer Science, Artificial Intelligence, Hardware and Architecture, Software
- Abstract
Proton computed tomography (pCT) is an imaging modality that has been in development to support targeted dose delivery in proton therapy. It aims to accurately map the distribution of relative stopping power. Because protons traverse material media in non-linear paths, pCT requires individual proton processing. Image reconstruction then becomes a time-consuming process. Clinical-use scenarios that require images from billions of protons in less than ten or fifteen minutes have motivated us to use distributed and hardware-accelerated computing methods to achieve fast image reconstruction. Combined use of MPI and GPUs demonstrates that clinically viable image reconstruction is possible. On a 60-node CPU/GPU computer cluster, we achieved efficient strong and weak scaling when reconstructing images from two billion histories in under seven minutes. This represents a significant improvement over the previous state-of-the-art in pCT, which took almost seventy minutes to reconstruct an image from 131 million histories on a single-CPU, single-GPU computer.
- Published
- 2013