Search

Your search for "Brian Van Essen" returned 38 results.

Search Constraints

You searched for: Author: "Brian Van Essen", limited to Full Text.

Search Results

1. CANDLE/Supervisor: a workflow framework for machine learning applied to cancer research

2. REMODEL: Rethinking Deep CNN Models to Detect and Count on a NeuroSynaptic System

3. Machine Learning-Driven Multiscale Modeling: Bridging the Scales with a Next-Generation Simulation Infrastructure

4. Co-design Center for Exascale Machine Learning Technologies (ExaLearn)

5. Scalable Composition and Analysis Techniques for Massive Scientific Workflows

6. Enabling rapid COVID-19 small molecule drug design through scalable deep learning of generative models

7. Machine-learning-based dynamic-importance sampling for adaptive multiscale simulations

8. Machine learning–driven multiscale modeling reveals lipid-dependent dynamics of RAS signaling proteins

9. Monitoring Large Scale Supercomputers: A Case Study with the Lassen Supercomputer

10. Scalable Topological Data Analysis and Visualization for Evaluating Data-Driven Models in Scientific Applications

11. Is Disaggregation possible for HPC Cognitive Simulation?

12. The Case for Strong Scaling in Deep Learning: Training Large 3D CNNs with Hybrid Parallelism

13. Modular Spiking Neural Circuits for Mapping Long Short-Term Memory on a Neurosynaptic Processor

14. Channel and filter parallelism for large-scale CNN training

15. Preparation and optimization of a diverse workload for a large-scale heterogeneous system

16. Parallelizing Training of Deep Generative Models on Massive Scientific Datasets

17. A State-of-the-Art Survey on Deep Learning Theory and Architectures

18. REMODEL: Rethinking Deep CNN Models to Detect and Count on a NeuroSynaptic System

19. Enabling Machine Learning-Ready HPC Ensembles with Merlin

20. Improving Strong-Scaling of CNN Training by Exploiting Finer-Grained Parallelism

22. Extreme Heterogeneity 2018 - Productive Computational Science in the Era of Extreme Heterogeneity: Report for DOE ASCR Workshop on Extreme Heterogeneity

23. Aluminum: An Asynchronous, GPU-Aware Communication Library Optimized for Large-Scale Training of Deep Neural Networks on HPC Systems

24. Effective Quantization Approaches for Recurrent Neural Networks

25. Extreme Heterogeneity 2018: Productive Computational Science in the Era of Extreme Heterogeneity Report for DOE ASCR Basic Research Needs Workshop on Extreme Heterogeneity January 23–25, 2018

26. Towards Scalable Parallel Training of Deep Neural Networks

27. Argo NodeOS: Toward Unified Resource Management for Exascale

28. DI-MMAP—a scalable memory-map runtime for out-of-core data-intensive applications

30. Towards a Distributed Large-Scale Dynamic Graph Data Store

31. LBANN

32. Multi-threaded streamline tracing for data-intensive architectures

33. DI-MMAP: A High Performance Memory-Map Runtime for Data-Intensive Applications

34. Integrated in-system storage architecture for high performance computing

35. Accelerating a Random Forest Classifier: Multi-Core, GP-GPU, or FPGA?

36. Energy-efficient specialization of functional units in a coarse-grained reconfigurable array

37. Managing Short-Lived and Long-Lived Values in Coarse-Grained Reconfigurable Arrays

38. Static versus scheduled interconnect in Coarse-Grained Reconfigurable Arrays
