149 results for "Derek Groen"
Search Results
102. A parallel gravitational N-body kernel
- Author
-
Simon Portegies Zwart, Steve McMillan, Derek Groen, Alessia Gualandris, Michael Sipior, and Willem Vermin
- Published
- 2007
103. Computational Science – ICCS 2022 : 22nd International Conference, London, UK, June 21–23, 2022, Proceedings, Part II
- Author
-
Derek Groen, Clélia de Mulatier, Maciej Paszynski, Valeria V. Krzhizhanovskaya, Jack J. Dongarra, and Peter M. A. Sloot
- Subjects
- Computational complexity--Congresses, Computer science--Congresses
- Abstract
The four-volume set LNCS 13350, 13351, 13352, and 13353 constitutes the proceedings of the 22nd International Conference on Computational Science, ICCS 2022, held in London, UK, in June 2022. The total of 175 full papers and 78 short papers presented in this book set were carefully reviewed and selected from 474 submissions. 169 full and 36 short papers were accepted to the main track; 120 full and 42 short papers were accepted to the workshops/thematic tracks. The conference was held in a hybrid format.
- Published
- 2022
104. Multiscale computing in the exascale era
- Author
-
Derek Groen, Alfons G. Hoekstra, Peter V. Coveney, Saad Alowayyed, and Computational Science Lab (IVI, FNWI)
- Subjects
FOS: Computer and information sciences ,General Computer Science ,ComputerSystemsOrganization_COMPUTERSYSTEMIMPLEMENTATION ,Computer science ,Distributed computing ,MathematicsofComputing_NUMERICALANALYSIS ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,Parallel computing ,01 natural sciences ,010305 fluids & plasmas ,Theoretical Computer Science ,Multiscale modelling ,Software ,Multiscale computing ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,ComputingMethodologies_COMPUTERGRAPHICS ,business.industry ,Separation of concerns ,Fault tolerance ,Supercomputer ,Exascale ,ComputingMethodologies_PATTERNRECOGNITION ,Computer Science - Distributed, Parallel, and Cluster Computing ,Modeling and Simulation ,020201 artificial intelligence & image processing ,Distributed, Parallel, and Cluster Computing (cs.DC) ,High performance computing ,business ,Energy (signal processing) - Abstract
We expect that multiscale simulations will be one of the main high performance computing workloads in the exascale era. We propose multiscale computing patterns as a generic vehicle to realise load balanced, fault tolerant and energy aware high performance multiscale computing. Multiscale computing patterns should lead to a separation of concerns, whereby application developers can compose multiscale models and execute multiscale simulations, while pattern software realises optimized, fault tolerant and energy aware multiscale computing. We introduce three multiscale computing patterns, present an example of the extreme scaling pattern, and discuss our vision of how this may shape multiscale computing in the exascale era.
- Published
- 2017
- Full Text
- View/download PDF
105. A Serious Video Game To Support Decision Making On Refugee Aid Deployment Policy
- Author
-
Jose Emmanuel Ramirez-Marquez, Luis Eduardo Perez Estrada, and Derek Groen
- Subjects
0301 basic medicine ,Interface (Java) ,Computer science ,Refugee ,Computer security ,computer.software_genre ,Facility location problem ,03 medical and health sciences ,Subject-matter expert ,030104 developmental biology ,Action (philosophy) ,Software deployment ,General Earth and Planetary Sciences ,Resource allocation ,Video game ,computer ,General Environmental Science - Abstract
The success of refugee support operations depends on the ability of humanitarian organizations and governments to deploy aid effectively. These operations require that decisions on resource allocation are made as quickly as possible in order to respond to urgent crises and, by anticipating future developments, remain adequate as the situation evolves. Agent-based modeling and simulation has been used to understand the progression of past refugee crises, as well as to predict how new ones will unfold. In this work, we tackle the problem of refugee aid deployment as a variant of the Robust Facility Location Problem (RFLP). We present a serious video game that functions as an interface for an agent-based simulation run with data from past refugee crises. Having obtained good approximate solutions to the RFLP by implementing a game that frames the problem as a puzzle, we adapted its mechanics and interface to correspond to refugee situations. The game is intended to be played by both subject matter experts and the general public, as a way to crowd-source effective courses of action in these situations.
- Published
- 2017
- Full Text
- View/download PDF
106. Building Global Research Capacity in Public Health: The Case of a Science Gateway for Physical Activity Lifelong Modelling and Simulation
- Author
-
Riccardo Bruno, Simon J. E. Taylor, Diana Suleimenova, Nana Anokye, Anastasia Anagnostou, Roberto Barbera, and Derek Groen
- Subjects
Sustainable development ,Open science ,medicine.medical_specialty ,Knowledge management ,business.industry ,Public health ,Physical activity ,030204 cardiovascular system & hematology ,Risk factor (computing) ,Science gateway ,03 medical and health sciences ,0302 clinical medicine ,Quality of life (healthcare) ,Research capacity ,medicine ,030212 general & internal medicine ,Business - Abstract
Physical inactivity is a major risk factor for non-communicable disease and has a negative impact on quality of life in both high-income and low- and middle-income countries (LMICs). Increasing levels of physical activity is recognized as a strategic pathway to achieving the UN’s 2030 Sustainable Development Goals. Research can support policy makers in evaluating strategies for achieving this goal, but various barriers limit the capacity of researchers in LMICs to carry out such work. We discuss how global research capacity might be developed in public health by supporting collaboration via Open Science approaches and technologies such as Science Gateways and Open Access Repositories. The paper reports on how we are contributing to research capacity building in Ghana using a Science Gateway for our PALMS (Physical Activity Lifelong Modelling & Simulation) agent-based micro-simulation that we developed in the UK, and how we use an Open Access Repository to share the outputs of the research.
- Published
- 2019
- Full Text
- View/download PDF
107. Mastering the scales: a survey on the benefits of multiscale computing software
- Author
-
Lourens Veen, Kenneth W. Leiter, Jaroslaw Knap, Diana Suleimenova, Philipp Neumann, and Derek Groen
- Subjects
Computer science ,General Mathematics ,Science and engineering ,General Physics and Astronomy ,02 engineering and technology ,Review Article ,01 natural sciences ,010305 fluids & plasmas ,Software ,0103 physical sciences ,business.industry ,General Engineering ,high-performance computing ,Usability ,Articles ,021001 nanoscience & nanotechnology ,Supercomputer ,Data science ,multiscale simulation ,usability ,Dominance (economics) ,multiscale computing ,multiscale modelling ,0210 nano-technology ,business - Abstract
Electronic supplementary material is available online at https://doi.org/10.6084/m9.figshare.c.4352660. © 2019 The Authors. In the last few decades, multiscale modeling has emerged as one of the dominant modeling paradigms in many areas of science and engineering. Its rise to dominance is primarily driven by advancements in computing power and the need to model systems of increasing complexity. The multiscale modeling paradigm is now accompanied by a vibrant ecosystem of multiscale computing software (MCS) which promises to address many challenges in the development of multiscale applications. In this paper, we define the common steps in the multiscale application development process and investigate to what degree a set of 22 representative MCS tools enhance each development step. We observe several gaps in the features provided by MCS tools, especially for application deployment and the preparation and management of production runs. In addition, we find that many MCS tools are tailored to a particular multiscale computing pattern, even though they are otherwise application agnostic. We conclude that the gaps we identify are characteristic of a field that is still maturing, and that features which enhance the deployment and production steps of multiscale application development are desirable for the long-term success of MCS in its application fields. This work was supported by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement nos. 800925 and 671564, and by the ‘Task-based load balancing and auto-tuning in particle simulations’ project (TaLPas), grant no. 01IH16008B.
- Published
- 2019
108. A coupled food security and refugee movement model for the South Sudan conflict
- Author
-
Diana Suleimenova, Derek Groen, and Christian Vanhille Campos
- Subjects
0303 health sciences ,education.field_of_study ,Food security ,010504 meteorology & atmospheric sciences ,Public economics ,As is ,Refugee ,Population ,Data-driven simulation ,01 natural sciences ,Test (assessment) ,Agent-based modelling ,03 medical and health sciences ,Forced migration ,Multiscale modelling ,Forced displacement ,Political science ,Correlation analysis ,Set (psychology) ,education ,030304 developmental biology ,0105 earth and related environmental sciences - Abstract
We investigate, through correlation analysis of data sets, how relevant the food situation is to the simulation of refugee dynamics. Armed conflicts often imply difficult food access conditions for the population, which can have a great impact on the behaviour of the refugees, as is the case in South Sudan. To test our approach, we adopt the Flee agent-based simulation code, combining it with a data-driven food security model to enhance the rule set for determining refugee movements. We test two different approaches for South Sudan and find promising yet negative results. While our first approaches to modelling the refugees' response to food insecurity do not improve the error of the simulation development approach, we show that this behaviour is highly non-trivial and that properly understanding it could inform the development of reliable models of refugee dynamics.
- Published
- 2019
109. Patterns for High Performance Multiscale Computing
- Author
-
Derek Groen, Vytautas Jancauskas, Saad Alowayyed, Peter V. Coveney, Piotr Kopta, Bartosz Bosak, Oliver Perks, O. O. Luk, Tomasz Piontek, O. Hoenen, James L. Suter, Krzysztof Kurowski, Alfons G. Hoekstra, Keeran Brabazon, D. P. Coster, and Computational Science Lab (IVI, FNWI)
- Subjects
Modelling methodology ,Computer Networks and Communications ,Computer science ,business.industry ,Distributed computing ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020206 networking & telecommunications ,02 engineering and technology ,Load balancing (computing) ,Supercomputer ,Model coupling ,Multiscale computing ,Software ,Hardware and Architecture ,Middleware ,0202 electrical engineering, electronic engineering, information engineering ,Graph (abstract data type) ,Leverage (statistics) ,020201 artificial intelligence & image processing ,High performance computing ,business - Abstract
We describe our Multiscale Computing Patterns software for High Performance Multiscale Computing. Following a short review of Multiscale Computing Patterns, this paper introduces the Multiscale Computing Patterns Software, which consists of description, optimisation and execution components. First, the description component translates the task graph, representing a multiscale simulation, to a particular type of multiscale computing pattern. Second, the optimisation component selects and applies algorithms to find the most suitable mapping between submodels and available HPC resources. Third, the execution component, a middleware layer, maps submodels to the number and type of physical resources based on the suggestions emanating from the optimisation component, together with infrastructure-specific metrics such as queueing time and resource availability. The main purpose of the Multiscale Computing Patterns software is to leverage the Multiscale Computing Patterns to simplify and automate the execution of complex multiscale simulations on high performance computers, and to provide both application-specific and pattern-specific performance optimisation. We test the performance and the resource usage for three multiscale models, which are expressed in terms of two Multiscale Computing Patterns. In doing so, we demonstrate how the software automates resource selection and load balancing, and delivers performance benefits from both the end-user and the HPC system level perspectives.
- Published
- 2019
110. A generalized simulation development approach for predicting refugee destinations
- Author
-
Diana Suleimenova, Derek Groen, and David Bell
- Subjects
Operations research ,Databases, Factual ,Refugee ,0211 other engineering and technologies ,lcsh:Medicine ,Distribution (economics) ,02 engineering and technology ,Destinations ,Article ,050602 political science & public administration ,Medicine ,Humans ,Computer Simulation ,lcsh:Science ,Set (psychology) ,021110 strategic, defence & security studies ,Refugees ,Multidisciplinary ,business.industry ,lcsh:R ,05 social sciences ,Models, Theoretical ,0506 political science ,Forced migration ,Africa ,lcsh:Q ,Construct (philosophy) ,business ,Algorithms - Abstract
In recent years, global forced displacement has reached record levels, with 22.5 million refugees worldwide. Forecasting refugee movements is important, as accurate predictions can help save refugee lives by allowing governments and NGOs to conduct a better informed allocation of humanitarian resources. Here, we propose a generalized simulation development approach to predict the destinations of refugee movements in conflict regions. In this approach, we synthesize data from UNHCR, ACLED and Bing Maps to construct agent-based simulations of refugee movements. We apply our approach to develop, run and validate refugee movement simulations set in three major African conflicts, estimating the distribution of incoming refugees across destination camps, given the expected total number of refugees in the conflict. Our simulations consistently predict more than 75% of the refugee destinations correctly after the first 12 days, and consistently outperform alternative naive forecasting techniques. Using our approach, we are also able to reproduce key trends in refugee arrival rates found in the UNHCR data.
- Published
- 2017
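The agent-based approach described in entry 110 moves refugee agents over a location graph assembled from conflict, camp and routing data. The snippet below is a minimal, hypothetical Python sketch of one movement step on such a graph; the location names, the weight function and the preference values are illustrative assumptions and do not reproduce the published rule set.

```python
import random

# Hypothetical location graph: name -> (kind, list of (neighbour, distance in km)).
locations = {
    "conflict_A": ("conflict", [("town_B", 120.0), ("camp_C", 300.0)]),
    "town_B":     ("town",     [("camp_C", 180.0)]),
    "camp_C":     ("camp",     []),
}

def route_weight(kind, distance_km):
    """Toy attractiveness score: camps preferred, long routes penalised."""
    base = {"camp": 2.0, "town": 1.0, "conflict": 0.25}[kind]
    return base / max(distance_km, 1.0)

def step(agent_location):
    """Move one agent along a weighted random choice of outgoing routes."""
    routes = locations[agent_location][1]
    if not routes:                      # e.g. the agent already reached a camp
        return agent_location
    weights = [route_weight(locations[dest][0], dist) for dest, dist in routes]
    return random.choices([dest for dest, _ in routes], weights=weights, k=1)[0]

# One simulated day for a small agent population starting in the conflict zone.
agents = ["conflict_A"] * 1000
agents = [step(loc) for loc in agents]
print({name: agents.count(name) for name in locations})
```

In the published approach, the simulated distribution of arrivals across destination camps is then validated against UNHCR registration data.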
111. Distributed multiscale computing with MUSCLE 2, the Multiscale Coupling Library and Environment
- Author
-
Krzysztof Kurowski, Alfons G. Hoekstra, Derek Groen, Mariusz Mamonski, Bastien Chopard, M. Ben Belgacem, Bartosz Bosak, Joris Borgdorff, Peter V. Coveney, and Computational Science Lab (IVI, FNWI)
- Subjects
FOS: Computer and information sciences ,General Computer Science ,Java ,Modeling language ,Computer science ,Fortran ,Message Passing Interface ,02 engineering and technology ,Parallel computing ,GridFTP ,Software_PROGRAMMINGTECHNIQUES ,01 natural sciences ,010305 fluids & plasmas ,Theoretical Computer Science ,Computational Engineering, Finance, and Science (cs.CE) ,Multiscale modelling ,Model coupling ,Modelling and Simulation ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,ddc:025.063 ,Computer Science - Computational Engineering, Finance, and Science ,computer.programming_language ,Computer Science - Performance ,Execution environment ,Distributed multiscale computing ,Python (programming language) ,MUSCLE ,Canal system ,Performance (cs.PF) ,Computer Science - Distributed, Parallel, and Cluster Computing ,Multiscale coupling ,Modeling and Simulation ,020201 artificial intelligence & image processing ,Distributed, Parallel, and Cluster Computing (cs.DC) ,computer ,Computer Science(all) - Abstract
We present the Multiscale Coupling Library and Environment: MUSCLE 2. This multiscale component-based execution environment has a simple-to-use Java, C++, C, Python and Fortran API, compatible with MPI, OpenMP and threading codes. We demonstrate its local and distributed computing capabilities and compare its performance to MUSCLE 1, file copy, MPI, MPWide, and GridFTP. The local throughput of MPI is about two times higher, so very tightly coupled code should use MPI as a single submodel of MUSCLE 2; the distributed performance of GridFTP is lower, especially for small messages. We test the performance of a canal system model with MUSCLE 2, where it introduces an overhead as small as 5% compared to MPI. (18 pages, 22 figures, submitted to journal.)
- Published
- 2014
112. Support for Multiscale Simulations with Molecular Dynamics
- Author
-
Marian Bubak, Daniel Harezlak, Eryk Ciepiela, Stefan J. Zasada, Grzegorz Dyk, James J. Suter, Peter V. Coveney, Derek Groen, Tomasz Gubała, Maciej Pawlik, and Katarzyna Rycerz
- Subjects
SIMPLE (military communications protocol) ,Computer science ,business.industry ,Distributed computing ,020206 networking & telecommunications ,02 engineering and technology ,Construct (python library) ,021001 nanoscience & nanotechnology ,computer.software_genre ,distributed multiscale simulations ,Scripting language ,tools ,0202 electrical engineering, electronic engineering, information engineering ,General Earth and Planetary Sciences ,Data mining ,0210 nano-technology ,business ,e-infrastructures ,computer ,General Environmental Science ,Graphical user interface - Abstract
We present a reusable solution that supports users in combining single-scale models to create a multiscale application. Our approach applies several multiscale programming tools to allow users to compose multiscale applications using a graphical interface, and provides an easy way to execute these multiscale applications on international production infrastructures. Our solution extends the general purpose scripting approach of the GridSpace platform with simple mechanisms for accessing production resources, provided by the Application Hosting Environment (AHE). We apply our support solution to construct and execute a multiscale simulation of clay-polymer nanocomposite materials, and showcase its benefit in reducing the effort required to do a number of time-intensive user tasks.
- Published
- 2013
- Full Text
- View/download PDF
113. Ten simple rules for effective computational research
- Author
-
Neil Dalchau, Maria Bruna, Stephen Emmott, Joe Pitt-Francis, Christian A. Yates, Jonathan Cooper, David J. Gavaghan, Sara-Jane Dunn, Biswa Sengupta, David W. Wright, Miguel O. Bernabeu, Ben Calderhead, Greg J. McInerny, James M. Osborne, Bernhard Knapp, Derek Groen, Gary R. Mirams, Robin Freeman, Alexander G. Fletcher, and Charlotte M. Deane
- Subjects
Computer and Information Sciences ,QH301-705.5 ,Computer science ,media_common.quotation_subject ,Best practice ,Ecology (disciplines) ,Computational scientist ,Cellular and Molecular Neuroscience ,Software ,Multidisciplinary approach ,Reading (process) ,Genetics ,Computer Simulation ,Biology (General) ,Molecular Biology ,Ecology, Evolution, Behavior and Systematics ,media_common ,Models, Statistical ,Ecology ,Computers ,business.industry ,Software development ,Biology and Life Sciences ,Computational Biology ,Software Engineering ,Data science ,Editorial ,Computational Theory and Mathematics ,Modeling and Simulation ,business ,Range (computer programming) - Abstract
In order to attempt to understand the complexity inherent in nature, mathematical, statistical and computational techniques are increasingly being employed in the life sciences. In particular, the use and development of software tools is becoming vital for investigating scientific hypotheses, and a wide range of scientists are finding software development playing a more central role in their day-to-day research. In fields such as biology and ecology, there has been a noticeable trend towards the use of quantitative methods for both making sense of ever-increasing amounts of data [1] and building or selecting models [2]. As Research Fellows of the “2020 Science” project (http://www.2020science.net), funded jointly by the EPSRC (Engineering and Physical Sciences Research Council) and Microsoft Research, we have firsthand experience of the challenges associated with carrying out multidisciplinary computation-based science [3]–[5]. In this paper we offer a jargon-free guide to best practice when developing and using software for scientific research. While many guides to software development exist, they are often aimed at computer scientists [6] or concentrate on large open-source projects [7]; the present guide is aimed specifically at the vast majority of scientific researchers: those without formal training in computer science. We present our ten simple rules with the aim of enabling scientists to be more effective in undertaking research and therefore maximise the impact of this research within the scientific community. While these rules are described individually, collectively they form a single vision for how to approach the practical side of computational science. Our rules are presented in roughly the chronological order in which they should be undertaken, beginning with things that, as a computational scientist, you should do before you even think about writing any code. For each rule, guides on getting started, links to relevant tutorials, and further reading are provided in the supplementary material (Text S1).
- Published
- 2016
- Full Text
- View/download PDF
114. Multiscale modelling and simulation, 13th international workshop
- Author
-
Alfons G. Hoekstra, Derek Groen, Timothy D. Scheibe, Bartosz Bosak, Valeria V. Krzhizhanovskaya, and Computational Science Lab (IVI, FNWI)
- Subjects
Multiscale ,Computer science ,Management science ,Multiphysics ,simulation ,Multiscale modeling ,Modelling ,Bridging (programming) ,modelling ,Scientific analysis ,Coupling ,multiphysics ,multiscale ,Multidisciplinary approach ,Software deployment ,Systems engineering ,General Earth and Planetary Sciences ,coupling ,Simulation ,General Environmental Science - Abstract
Multiscale Modelling and Simulation (MMS) is a cornerstone of today's research in computational science. Simulations containing multiple models, with each model operating at a different temporal or spatial scale, are a challenging setting that frequently requires innovative approaches in areas such as scale bridging, code deployment, error quantification, and scientific analysis. The aim of the MMS workshop is to encourage and consolidate the progress in this multidisciplinary research field, both in the areas of the scientific applications and the underlying infrastructures that enable these applications. Here we briefly introduce the scope of the workshop and highlight some of the key aspects of this year's submissions.
- Published
- 2016
115. Multiscale Modelling and Simulation, 14th International Workshop
- Author
-
Alfons G. Hoekstra, Petros Koumoutsakos, Derek Groen, Bartosz Bosak, Valeria V. Krzhizhanovskaya, and Computational Science Lab (IVI, FNWI)
- Subjects
Scientific analysis ,Software deployment ,Computer science ,Multiphysics ,Systems engineering ,General Earth and Planetary Sciences ,General Environmental Science ,Bridging (programming) - Abstract
Multiscale Modelling and Simulation (MMS) is a cornerstone of today’s research in computational science. Simulations containing multiple models, with each model operating at a different temporal or spatial scale, are a challenging setting that frequently requires innovative approaches in areas such as scale bridging, code deployment, error quantification, and scientific analysis. The aim of the MMS workshop is to encourage and consolidate the progress in this multi-disciplinary research field, both in the areas of the scientific applications and the underlying infrastructures that enable these applications. Here we briefly introduce the scope of the workshop and highlight some of the key aspects of this year’s submissions.
- Published
- 2017
116. Impact of immigrants on a multi-agent economical system
- Author
-
Ranaivo Mahaleo Razakanirina, Bastien Chopard, Léa Claire Kaufmann, and Derek Groen
- Subjects
Employment ,Computer and Information Sciences ,Labor markets ,Economics ,media_common.quotation_subject ,Immigration ,lcsh:Medicine ,Social Sciences ,Emigrants and Immigrants ,Economics of migration ,0102 computer and information sciences ,Research and Analysis Methods ,Systems Science ,01 natural sciences ,Social systems ,Microeconomics ,Agent-Based Modeling ,Sociology ,0103 physical sciences ,Salaries ,Market price ,Humans ,ddc:025.063 ,lcsh:Science ,010306 general physics ,media_common ,Human Capital ,Multidisciplinary ,Simulation and Modeling ,lcsh:R ,Economic agents ,Labor Markets ,Emigration and Immigration ,Simulation and modeling ,Economic Agents ,Agent-based modeling ,010201 computation theory & mathematics ,Social system ,Labor Economics ,Physical Sciences ,Social Systems ,lcsh:Q ,Economics of Migration ,Mathematics ,Algorithms ,Models, Econometric ,Research Article - Abstract
© 2018 Kaufmann et al. We consider a multi-agent model of a simple economical system and study the impacts of a wave of immigrants on the stability of the system. Our model couples a labor market with a goods market. We first create a stable economy with N agents and study the impact of adding n new workers to the system. The time to reach a new market equilibrium is found to obey a power law in n. The new wages and market prices are observed to decrease as 1/n, whereas the wealth of agents remains unchanged.
- Published
- 2018
- Full Text
- View/download PDF
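The scaling results reported in the abstract of entry 116 can be restated compactly in LaTeX; the exponent of the power law is not specified in the abstract, and proportionality constants are omitted:

```latex
% Time to reach the new market equilibrium after adding n workers (exponent alpha unspecified)
t_{\mathrm{eq}}(n) \;\propto\; n^{\alpha}
% New equilibrium wages and market prices decay with the size of the immigration wave
w_{\mathrm{new}}(n) \;\propto\; \frac{1}{n}, \qquad p_{\mathrm{new}}(n) \;\propto\; \frac{1}{n}
```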
117. The living application: A self-organizing system for complex grid tasks
- Author
-
S. Portegies Zwart, Stefan Harfst, Derek Groen, and Computational Science Lab (IVI, FNWI)
- Subjects
FOS: Computer and information sciences ,Computer science ,Distributed computing ,FOS: Physical sciences ,02 engineering and technology ,01 natural sciences ,Theoretical Computer Science ,Computer Science - Networking and Internet Architecture ,Resource (project management) ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,C.2.4 ,010303 astronomy & astrophysics ,Networking and Internet Architecture (cs.NI) ,Solver ,Grid ,Astrophysics - Astrophysics of Galaxies ,Range (mathematics) ,Workflow ,Computer Science - Distributed, Parallel, and Cluster Computing ,Hardware and Architecture ,Astrophysics of Galaxies (astro-ph.GA) ,020201 artificial intelligence & image processing ,Distributed, Parallel, and Cluster Computing (cs.DC) ,State (computer science) ,Software - Abstract
We present the living application, a method to autonomously manage applications on the grid. During its execution on the grid, the living application makes choices on the resources to use in order to complete its tasks. These choices can be based on the internal state, or on autonomously acquired knowledge from external sensors. By giving limited user capabilities to a living application, the living application is able to port itself from one resource topology to another. The application performs these actions at run-time without depending on users or external workflow tools. We demonstrate this new concept in a special case of a living application: the living simulation. Today, many simulations require a wide range of numerical solvers and run most efficiently if specialized nodes are matched to the solvers. The idea of the living simulation is that it decides itself which grid machines to use, based on the numerical solver currently in use. In this paper we apply the living simulation to modelling the collision between two galaxies in a test setup with two specialized computers. This simulation switches at run-time between a GPU-enabled computer in the Netherlands and a GRAPE-enabled machine that resides in the United States, using an oct-tree N-body code whenever it runs in the Netherlands and a direct N-body solver in the United States. (26 pages, 3 figures, accepted by IJHPCA.)
- Published
- 2010
118. A multiphysics and multiscale software environment for modeling astrophysical systems
- Author
-
Simon Portegies Zwart, Steve McMillan, Stefan Harfst, Derek Groen, Michiko Fujii, Breanndán Ó Nualláin, Evert Glebbeek, Douglas Heggie, James Lombardi, Piet Hut, Vangelis Angelou, Sambaran Banerjee, Houria Belkus, Tassos Fragos, John Fregeau, Evghenii Gaburov, Rob Izzard, Mario Jurić, Stephen Justham, Andrea Sottoriva, Peter Teuben, Joris van Bever, Ofer Yaron, Marcel Zemp, and High Energy Astrophys. & Astropart. Phys (API, FNWI)
- Subjects
010308 nuclear & particles physics ,Space and Planetary Science ,Astrophysics (astro-ph) ,0103 physical sciences ,FOS: Physical sciences ,Computer Science::Programming Languages ,Astronomy and Astrophysics ,Astrophysics::Cosmology and Extragalactic Astrophysics ,Astrophysics ,010303 astronomy & astrophysics ,01 natural sciences ,Instrumentation ,Astrophysics::Galaxy Astrophysics - Abstract
We present MUSE, a software framework for combining existing computational tools for different astrophysical domains into a single multiphysics, multiscale application. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly-coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for studying generalized stellar systems. We have now reached a "Noah's Ark" milestone, with (at least) two available numerical solvers for each domain. MUSE can treat multi-scale and multi-physics systems in which the time- and size-scales are well separated, like simulating the evolution of planetary systems, small stellar associations, dense stellar clusters, galaxies and galactic nuclei. In this paper we describe three examples calculated using MUSE: the merger of two galaxies, the merger of two evolving stars, and a hybrid N-body simulation. In addition, we demonstrate an implementation of MUSE on a distributed computer which may also include special-purpose hardware, such as GRAPEs or GPUs, to accelerate computations. The current MUSE code base is publicly available as open source at http://muse.li. (24 pages, to appear in New Astronomy.)
- Published
- 2009
119. A parallel gravitational N-body kernel
- Author
-
Simon Portegies Zwart, Stephen L. W. McMillan, Willem Vermin, Derek Groen, Michael S. Sipior, Alessia Gualandris, High Energy Astrophys. & Astropart. Phys (API, FNWI), and Computational Science Lab (IVI, FNWI)
- Subjects
FOS: Computer and information sciences ,Speedup ,Source code ,Computation ,media_common.quotation_subject ,FOS: Physical sciences ,Parallel computing ,Astrophysics ,7. Clean energy ,01 natural sciences ,010305 fluids & plasmas ,Kernel (linear algebra) ,0103 physical sciences ,Code (cryptography) ,Overhead (computing) ,010303 astronomy & astrophysics ,Instrumentation ,media_common ,Physics ,Astrophysics (astro-ph) ,Astronomy and Astrophysics ,Grid ,Computer Science - Distributed, Parallel, and Cluster Computing ,Space and Planetary Science ,Integrator ,Distributed, Parallel, and Cluster Computing (cs.DC) - Abstract
We describe source-code-level parallelization for the kira direct gravitational N-body integrator, the workhorse of the starlab production environment for simulating dense stellar systems. The parallelization strategy, called "j-parallelization", involves the partition of the computational domain by distributing all particles in the system among the available processors. Partial forces on the particles to be advanced are calculated in parallel by their parent processors, and are then summed in a final global operation. Once total forces are obtained, the computing elements proceed to the computation of their particle trajectories. We report the results of timing measurements on four different parallel computers, and compare them with theoretical predictions. The computers employ either a high-speed interconnect or a NUMA architecture to minimize the communication overhead, or are distributed in a grid. The code scales well in the domain tested, which ranges from 1024 - 65536 stars on 1 - 128 processors, providing satisfactory speedup. Running the production environment on a grid becomes inefficient for more than 60 processors distributed across three sites. (21 pages, New Astronomy, in press.)
- Published
- 2008
- Full Text
- View/download PDF
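Entry 119 describes the "j-parallelization" strategy: all particles are distributed over the processors, each processor computes partial forces on the particles being advanced, and those partial forces are summed in a final global operation. Below is a minimal mpi4py/NumPy sketch of that pattern under simplifying assumptions (a replicated particle set, a single shared force evaluation for all particles, and an arbitrary softening value); it is not the kira integrator.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 4096                                  # total number of particles
rng = np.random.default_rng(42)           # same seed: positions replicated on every rank
pos = rng.standard_normal((N, 3))
mass = np.full(N, 1.0 / N)
eps2 = 1.0e-4                             # softening length squared (illustrative value)

# Each rank owns a contiguous slice of the force sources ("j-particles").
j_lo, j_hi = rank * N // size, (rank + 1) * N // size

def partial_acc(pos, mass, j_lo, j_hi):
    """Acceleration of every particle due to this rank's subset of sources."""
    acc = np.zeros_like(pos)
    for j in range(j_lo, j_hi):
        d = pos[j] - pos                          # vectors towards source j
        r2 = (d * d).sum(axis=1) + eps2
        r2[j] = np.inf                            # exclude self-interaction
        acc += mass[j] * d / r2[:, None] ** 1.5
    return acc

acc_local = partial_acc(pos, mass, j_lo, j_hi)
acc_total = np.empty_like(acc_local)
comm.Allreduce(acc_local, acc_total, op=MPI.SUM)  # final global sum of partial forces

if rank == 0:
    print("max |a| =", np.abs(acc_total).max())
```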
120. Distributed N-body simulation on the grid using dedicated hardware
- Author
-
Simon Portegies Zwart, Steve McMillan, Jun Makino, Derek Groen, High Energy Astrophys. & Astropart. Phys (API, FNWI), and Computational Science Lab (IVI, FNWI)
- Subjects
FOS: Computer and information sciences ,N-body simulation ,Virtual organization ,FOS: Physical sciences ,02 engineering and technology ,Network topology ,Astrophysics ,01 natural sciences ,7. Clean energy ,020204 information systems ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Latency (engineering) ,010303 astronomy & astrophysics ,Instrumentation ,Physics ,Unit of time ,business.industry ,Astrophysics (astro-ph) ,Astronomy and Astrophysics ,Grid ,Supercomputer ,Wide area ,Computer Science - Distributed, Parallel, and Cluster Computing ,Space and Planetary Science ,Distributed, Parallel, and Cluster Computing (cs.DC) ,business ,Computer hardware - Abstract
We present performance measurements of direct gravitational N-body simulation on the grid, with and without specialized (GRAPE-6) hardware. Our inter-continental virtual organization consists of three sites, one in Tokyo, one in Philadelphia and one in Amsterdam. We run simulations with up to 196608 particles for a variety of topologies. In many cases, high performance simulations over the entire planet are dominated by network bandwidth rather than latency. With this global grid of GRAPEs our calculation time remains dominated by communication over the entire range of N, which was limited due to the use of three sites. Increasing the number of particles will result in a more efficient execution. Based on these timings we construct and calibrate a model to predict the performance of our simulation on any grid infrastructure with or without GRAPE. We apply this model to predict the simulation performance on the Netherlands DAS-3 wide area computer. Equipping the DAS-3 with GRAPE-6Af hardware would achieve break-even between calculation and communication at a few million particles, resulting in a compute time of just over ten hours for 1 N-body time unit. Key words: high-performance computing, grid, N-body simulation, performance modelling. (In press, New Astronomy; 24 pages, 5 figures.)
- Published
- 2008
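Entry 120 constructs and calibrates a model that predicts when calculation and communication break even on a grid of GRAPE hardware. A simple illustrative form of that argument, assuming a direct-summation force cost and a linear communication cost (c_calc and c_comm stand for machine- and network-dependent factors, and p is the number of processors), is:

```latex
t_{\mathrm{calc}}(N) \;\approx\; c_{\mathrm{calc}}\,\frac{N^{2}}{p}, \qquad
t_{\mathrm{comm}}(N) \;\approx\; c_{\mathrm{comm}}\,N,
\qquad t_{\mathrm{calc}} = t_{\mathrm{comm}}
\;\Longrightarrow\; N_{\mathrm{break\text{-}even}} \;\approx\; \frac{c_{\mathrm{comm}}}{c_{\mathrm{calc}}}\,p .
```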
121. Anatomy and Physiology of Multiscale Modeling and Simulation in Systems Medicine
- Author
-
Alexandru, Mizeranschi, Derek, Groen, Joris, Borgdorff, Alfons G, Hoekstra, Bastien, Chopard, and Werner, Dubitzky
- Subjects
Information Management ,Physiology ,Systems Biology ,Database Management Systems ,Humans ,Medicine ,Computer Simulation ,Anatomy ,Models, Biological - Abstract
Systems medicine is the application of systems biology concepts, methods, and tools to medical research and practice. It aims to integrate data and knowledge from different disciplines into biomedical models and simulations for the understanding, prevention, cure, and management of complex diseases. Complex diseases arise from the interactions among disease-influencing factors across multiple levels of biological organization from the environment to molecules. To tackle the enormous challenges posed by complex diseases, we need a modeling and simulation framework capable of capturing and integrating information originating from multiple spatiotemporal and organizational scales. Multiscale modeling and simulation in systems medicine is an emerging methodology and discipline that has already demonstrated its potential in becoming this framework. The aim of this chapter is to present some of the main concepts, requirements, and challenges of multiscale modeling and simulation in systems medicine.
- Published
- 2015
122. Mechanism of Exfoliation and Prediction of Materials Properties of Clay-Polymer Nanocomposites from Multiscale Modeling
- Author
-
James L, Suter, Derek, Groen, and Peter V, Coveney
- Abstract
We describe the mechanism that leads to full exfoliation and dispersion of organophilic clays when mixed with molten hydrophilic polymers. This process is of fundamental importance for the production of clay-polymer nanocomposites with enhanced materials properties. The chemically specific nature of our multiscale approach allows us to probe how chemistry, in combination with processing conditions, produces such materials properties at the mesoscale and beyond. In general agreement with experimental observations, we find that a higher grafting density of charged quaternary ammonium surfactant ions promotes exfoliation, by a mechanism whereby the clay sheets slide transversally over one another. We can determine the elastic properties of these nanocomposites; exfoliated and partially exfoliated morphologies lead to substantial enhancement of the Young's modulus, as found experimentally.
- Published
- 2015
123. Science hackathons for developing interdisciplinary research and collaborations
- Author
-
Ben Calderhead and Derek Groen
- Subjects
Life Sciences & Biomedicine - Other Topics ,hackathons ,early career researcher ,Biomedical Research ,QH301-705.5 ,Science ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,General Biochemistry, Genetics and Molecular Biology ,03 medical and health sciences ,0302 clinical medicine ,early career researchers ,030212 general & internal medicine ,Cooperative Behavior ,Biology (General) ,Biology ,030304 developmental biology ,0303 health sciences ,Science & Technology ,General Immunology and Microbiology ,General Neuroscience ,cutting edge ,Feature Article ,General Medicine ,collaboration ,Research Design ,hackathon ,interdisciplinary research ,careers in science ,Medicine ,Engineering ethics ,Cooperative behavior ,Life Sciences & Biomedicine - Abstract
Science hackathons can help academics, particularly those in the early stage of their careers, to build collaborations and write research proposals.
- Published
- 2015
124. Weighted decomposition in high-performance lattice-Boltzmann simulations: Are some lattice sites more equal than others?
- Author
-
Derek Groen, David Abou Chacra, Rupert W. Nash, Peter V. Coveney, Miguel O. Bernabeu, Jiri Jaros, Markidis, S, and Laure, E
- Subjects
FOS: Computer and information sciences ,G.4 ,Computer science ,Lattice Boltzmann methods ,FOS: Physical sciences ,Parallel computing ,01 natural sciences ,010305 fluids & plasmas ,Computational science ,G.1.0 ,Software ,Mesoscale and Nanoscale Physics (cond-mat.mes-hall) ,0103 physical sciences ,Domain decomposition ,010306 general physics ,I.3.1 ,Condensed Matter - Mesoscale and Nanoscale Physics ,business.industry ,I.6.3 ,Domain decomposition methods ,Supercomputer ,Flow (mathematics) ,Computer Science - Distributed, Parallel, and Cluster Computing ,I.6.8 ,68W10, 68W40, 68U20, 68N30, 65Yxx ,Distributed, Parallel, and Cluster Computing (cs.DC) ,High performance computing ,business ,Lattice-Boltzmann - Abstract
Obtaining a good load balance is a significant challenge in scaling up lattice-Boltzmann simulations of realistic sparse problems to the exascale. Here we analyze the effect of weighted decomposition on the performance of the HemeLB lattice-Boltzmann simulation environment, when applied to sparse domains. Prior to domain decomposition, we assign wall and in/outlet sites with increased weights which reflect their increased computational cost. We combine our weighted decomposition with a second optimization, which is to sort the lattice sites according to a space-filling curve. We tested these strategies on a sparse bifurcation and a very sparse aneurysm geometry, and find that using weights reduces calculation load imbalance by up to 85%, although the overall communication overhead is higher in some of our runs. (11 pages, 8 figures, 1 table, accepted for the EASC2014 conference.)
- Published
- 2015
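Entry 124 combines two optimisations: wall and in/outlet sites receive larger weights than bulk fluid sites before domain decomposition, and the sites are ordered along a space-filling curve. The toy Python sketch below cuts a space-filling-curve ordering into contiguous chunks of roughly equal total weight; the weight values and the use of a Morton (Z-order) curve are illustrative assumptions, not the HemeLB implementation.

```python
import numpy as np

# Hypothetical per-site weights reflecting relative computational cost.
WEIGHTS = {"fluid": 1.0, "wall": 2.0, "inlet": 3.0, "outlet": 3.0}

def morton_key(x, y, z, bits=10):
    """Interleave coordinate bits to get a Z-order (Morton) index."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (3 * b)
        key |= ((y >> b) & 1) << (3 * b + 1)
        key |= ((z >> b) & 1) << (3 * b + 2)
    return key

def decompose(sites, n_ranks):
    """sites: list of (x, y, z, kind). Returns a rank id for each site."""
    order = sorted(range(len(sites)), key=lambda i: morton_key(*sites[i][:3]))
    weights = np.array([WEIGHTS[sites[i][3]] for i in order])
    # Cut the space-filling-curve ordering at equal shares of cumulative weight.
    cuts = np.searchsorted(np.cumsum(weights),
                           np.linspace(0, weights.sum(), n_ranks + 1)[1:-1])
    ranks = np.empty(len(sites), dtype=int)
    for rank, chunk in enumerate(np.split(np.array(order), cuts)):
        ranks[chunk] = rank
    return ranks

# Tiny example: a 4x4x4 block of fluid with one inlet site and one wall site.
sites = [(x, y, z, "fluid") for x in range(4) for y in range(4) for z in range(4)]
sites[0] = (0, 0, 0, "inlet")
sites[-1] = (3, 3, 3, "wall")
print(decompose(sites, n_ranks=4))
```

Cutting along a space-filling curve keeps each rank's sites spatially compact, which is what makes an equal-weight cut a reasonable proxy for balanced work.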
125. Computer simulations reveal complex distribution of haemodynamic forces in a mouse retina model of angiogenesis
- Author
-
Miguel O, Bernabeu, Martin L, Jones, Jens H, Nielsen, Timm, Krüger, Rupert W, Nash, Derek, Groen, Sebastian, Schmieschek, James, Hetherington, Holger, Gerhardt, Claudio A, Franco, and Peter V, Coveney
- Subjects
retina ,Microscopy, Confocal ,Hemodynamics ,Neovascularization, Physiologic ,Models, Biological ,shear stress ,Biomechanical Phenomena ,Mice ,angiogenesis ,Image Processing, Computer-Assisted ,Animals ,blood flow ,Computer Simulation ,lattice-Boltzmann ,Research Articles ,mouse - Abstract
There is currently limited understanding of the role played by haemodynamic forces on the processes governing vascular development. One of many obstacles to be overcome is being able to measure those forces, at the required resolution level, on vessels only a few micrometres thick. In this paper, we present an in silico method for the computation of the haemodynamic forces experienced by murine retinal vasculature (a widely used vascular development animal model) beyond what is measurable experimentally. Our results show that it is possible to reconstruct high-resolution three-dimensional geometrical models directly from samples of retinal vasculature and that the lattice-Boltzmann algorithm can be used to obtain accurate estimates of the haemodynamics in these domains. We generate flow models from samples obtained at postnatal days (P) 5 and 6. Our simulations show important differences between the flow patterns recovered in both cases, including observations of regression occurring in areas where wall shear stress (WSS) gradients exist. We propose two possible mechanisms to account for the observed increase in velocity and WSS between P5 and P6: (i) the measured reduction in typical vessel diameter between both time points and (ii) the reduction in network density triggered by the pruning process. The methodology developed herein is applicable to other biomedical domains where microvasculature can be imaged but experimental flow measurements are unavailable or difficult to obtain.
- Published
- 2014
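For reference, the wall shear stress (WSS) analysed in entry 125 is the tangential traction exerted by the flowing blood on the vessel wall; for a Newtonian fluid with dynamic viscosity mu it takes the familiar form below, where u_t is the velocity component tangential to the wall and n is the wall-normal direction.

```latex
\tau_{w} \;=\; \mu \left.\frac{\partial u_{t}}{\partial n}\right|_{\mathrm{wall}}
```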
126. Chemically specific multiscale modeling of clay-polymer nanocomposites reveals intercalation dynamics, tactoid self-assembly and emergent materials properties
- Author
-
James L. Suter, Derek Groen, and Peter V. Coveney
- Subjects
Clay–polymer nanocomposites ,Materials science ,Mechanical Engineering ,clay–polymer nanocomposites ,Library science ,tactic self-assembly ,Nanotechnology ,Feature Articles ,multiscale modeling ,Polymerentangled tactoids ,Mechanics of Materials ,materials properties ,General Materials Science ,Polymer intercalation ,intercalation dynamics - Abstract
A quantitative description is presented of the dynamical process of polymer intercalation into clay tactoids and the ensuing aggregation of polymer-entangled tactoids into larger structures, obtaining various characteristics of these nanocomposites, including clay-layer spacings, out-of-plane clay-sheet bending energies, X-ray diffractograms, and materials properties. This model of clay-polymer interactions is based on a three-level approach, which uses quantum mechanical and atomistic descriptions to derive a coarse-grained yet chemically specific representation that can resolve processes on hitherto inaccessible length and time scales. The approach is applied to study collections of clay mineral tactoids interacting with two synthetic polymers, poly(ethylene glycol) and poly(vinyl alcohol). The controlled behavior of layered materials in a polymer matrix is centrally important for many engineering and manufacturing applications. This approach opens up a route to computing the properties of complex soft materials based on knowledge of their chemical composition, molecular structure, and processing conditions. This work was funded in part by the EU FP7 MAPPER project (grant number RI-261507) and the Qatar National Research Fund (grant number 09–260–1–048). Supercomputing time was provided by PRACE on JUGENE (project PRA044), the Hartree Centre (Daresbury Laboratory) on BlueJoule and BlueWonder via the CGCLAY project, and on HECToR and ARCHER, the UK national supercomputing facility at the University of Edinburgh, via EPSRC through grants EP/F00521/1, EP/E045111/1, EP/I017763/1 and the UK Consortium on Mesoscopic Engineering Sciences (EP/L00030X/1). The authors are grateful to Professor Julian Evans for stimulating discussions during the course of this project. Data-storage and management services were provided by EUDAT (grant number 283304).
- Published
- 2014
127. Towards a computational system to support clinical treatment decisions for diagnosed cerebral aneurysms
- Author
-
Nikhil V. Navkar, Miguel O. Bernabeu, H. Kamel, Abdulla Al-Ansari, M. A. R. Saghir, Sarada Prasad Dakua, Peter V. Coveney, Derek Groen, and Julien Abinahed
- Subjects
medicine.medical_specialty ,Subarachnoid hemorrhage ,business.industry ,Adult population ,Hemodynamics ,medicine.disease ,Workflow model ,Aneurysm ,cardiovascular system ,medicine ,Clinical value ,Patient treatment ,cardiovascular diseases ,Radiology ,business ,Clinical treatment - Abstract
Cerebral aneurysms are one of the most prevalent and devastating cerebrovascular diseases of the adult population worldwide. In most cases, they cause rupturing of cerebral vessels inside the brain, which leads to subarachnoid hemorrhage. In this work, we present a clinical workflow model to assist endovascular interventionists in selecting the type of stent-related treatment for cerebral aneurysms. The model pipeline includes data acquisition, processing of aneurysm geometry and the simulation of blood flow. The preliminary results show the potential clinical value of the proposed computational workflow.
- Published
- 2014
- Full Text
- View/download PDF
128. Performance of distributed multiscale simulations
- Author
-
Borgdorff, J., Ben Belgacem, M., Bona-Casas, C., Fazendeiro, L., Derek Groen, Hoenen, O., Mizeranschi, A., Suter, J. L., Coster, D., Coveney, P. V., Dubitzky, W., Hoekstra, A. G., Strand, P., Chopard, B., and Computational Science Lab (IVI, FNWI)
- Subjects
Systems Integration ,distributed multiscale computing ,Software Design ,Astronomy, Astrophysics and Cosmology ,Computer Simulation ,Articles ,Models, Biological ,performance ,multiscale simulation ,Algorithms ,Software ,Research Article - Abstract
Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the apparent benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption. © 2014 The Authors.
- Published
- 2014
129. The Cosmogrid simulation: Statistical properties of small dark matter halos
- Author
-
Keigo Nitadori, Derek Groen, Cees de Laat, Tomoaki Ishiyama, Simon Portegies Zwart, Stefan Harfst, Junichiro Makino, Steven Rieder, Stephen L. W. McMillan, Kei Hiraki, Computational Science Lab (IVI, FNWI), System and Network Engineering (IVI, FNWI), IvI Research (FNWI), and Theory of Computer Science (IVI, FNWI)
- Subjects
Physics ,Cosmology and Nongalactic Astrophysics (astro-ph.CO) ,010308 nuclear & particles physics ,Halo mass function ,Dark matter ,Concentration parameter ,FOS: Physical sciences ,Astronomy and Astrophysics ,Astrophysics ,Radius ,Astrophysics::Cosmology and Extragalactic Astrophysics ,01 natural sciences ,Power law ,Space and Planetary Science ,Galaxy group ,0103 physical sciences ,Probability distribution ,Halo ,010303 astronomy & astrophysics ,Astrophysics::Galaxy Astrophysics ,Astrophysics - Cosmology and Nongalactic Astrophysics - Abstract
We present the results of the "Cosmogrid" cosmological N-body simulation suites based on the concordance LCDM model. The Cosmogrid simulation was performed in a 30 Mpc box with 2048^3 particles. The mass of each particle is 1.28x10^5 Msun, which is sufficient to resolve ultra-faint dwarfs. We found that the halo mass function shows good agreement with the Sheth & Tormen fitting function down to ~10^7 Msun. We have analyzed the spherically averaged density profiles of the three most massive halos, which are of galaxy group size and contain at least 170 million particles. The slopes of these density profiles become shallower than -1 at the innermost radius. We also find a clear correlation of halo concentration with mass. The mass dependence of the concentration parameter cannot be expressed by a single power law; however, a simple model based on the Press-Schechter theory proposed by Navarro et al. gives reasonable agreement with this dependence. The spin parameter does not show a correlation with the halo mass. The probability distribution functions for both concentration and spin are well fitted by the log-normal distribution for halos with masses larger than ~10^8 Msun. The subhalo abundance depends on the halo mass. Galaxy-sized halos have 50% more subhalos than ~10^11 Msun halos have. (15 pages, 18 figures, accepted by ApJ.)
- Published
- 2013
130. Multiscale Computing with the Multiscale Modeling Library and Runtime Environment
- Author
-
Mohamed Ben Belgacem, Bartosz Bosak, Joris Borgdorff, Derek Groen, Krzysztof Kurowski, Mariusz Mamonski, Alfons G. Hoekstra, and Computational Science Lab (IVI, FNWI)
- Subjects
distributed multiscale computing ,Java ,Fortran ,Computer science ,Quantitative Biology::Tissues and Organs ,Physics::Medical Physics ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,computer.software_genre ,01 natural sciences ,010305 fluids & plasmas ,Computational science ,Computer Science::Robotics ,Component (UML) ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Multiscale modeling ,ddc:025.063 ,General Environmental Science ,computer.programming_language ,ComputingMethodologies_COMPUTERGRAPHICS ,Distributed multiscale computing ,MUSCLE ,multiscale modeling ,General Earth and Planetary Sciences ,020201 artificial intelligence & image processing ,Data mining ,computer - Abstract
We introduce a software tool to simulate multiscale models: the Multiscale Coupling Library and Environment 2 (MUSCLE 2). MUSCLE 2 is a component-based modeling tool inspired by the multiscale modeling and simulation framework, with an easy-to-use API which supports Java, C++, C, and Fortran. We present MUSCLE 2's runtime features, such as its distributed computing capabilities, and its benefits to multiscale modelers. We also describe two multiscale models that use MUSCLE 2 to do distributed multiscale computing: an in-stent restenosis and a canal system model. We conclude that MUSCLE 2 is a notable improvement over the previous version of MUSCLE, and that it allows users to more flexibly deploy simulations of multiscale models, while improving their performance.
- Published
- 2013
131. Impact of blood rheology on wall shear stress in a model of the middle cerebral artery
- Author
-
Miguel O. Bernabeu, Derek Groen, James Hetherington, Hywel B. Carver, Rupert W. Nash, Timm Krüger, and Peter V. Coveney
- Subjects
FOS: Computer and information sciences ,blood flow modelling ,lattice Boltzmann ,FLOW SIMULATION ,CAROTID BIFURCATION ,Biomedical Engineering ,Biophysics ,FOS: Physical sciences ,Bioengineering ,Biochemistry ,Computational Engineering, Finance, and Science (cs.CE) ,Biomaterials ,Rheology ,medicine.artery ,Newtonian fluid ,medicine ,Shear stress ,INTRACRANIAL ANEURYSM ,three-band diagram analysis ,Computer Science - Computational Engineering, Finance, and Science ,multi-scale modelling ,RISK ,Physics ,COMPUTATIONAL FLUID-DYNAMICS ,Fluid Dynamics (physics.flu-dyn) ,Mechanics ,Articles ,Physics - Fluid Dynamics ,Physics - Medical Physics ,Right middle cerebral artery ,Middle cerebral artery ,rheology ,Medical Physics (physics.med-ph) ,Vascular pathology ,VISCOSITY ,Biotechnology - Abstract
Perturbations to the homeostatic distribution of mechanical forces exerted by blood on the endothelial layer have been correlated with vascular pathologies including intracranial aneurysms and atherosclerosis. Recent computational work suggests that in order to correctly characterise such forces, the shear-thinning properties of blood must be taken into account. To the best of our knowledge, these findings have never been compared against experimentally observed pathological thresholds. In the current work, we apply the three-band diagram (TBD) analysis due to Gizzi et al. to assess the impact of the choice of blood rheology model on a computational model of the right middle cerebral artery. Our results show that, in the model under study, the differences between the wall shear stress predicted by a Newtonian model and the well-known Carreau-Yasuda generalized Newtonian model are only significant if the vascular pathology under study is associated with a pathological threshold in the range 0.94 Pa to 1.56 Pa, where the results of the TBD analysis of the rheology models considered differ. Otherwise, we observe no significant differences. (14 pages, 6 figures, published in Interface Focus.)
- Published
- 2012
- Full Text
- View/download PDF
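Entry 131 compares a Newtonian rheology against the Carreau-Yasuda generalised Newtonian model. For reference, the Carreau-Yasuda viscosity has the standard form below, with zero- and infinite-shear viscosities mu_0 and mu_inf, relaxation time lambda and fitting exponents a and n; the blood-specific parameter values used in the paper are not restated here.

```latex
\mu(\dot{\gamma}) \;=\; \mu_{\infty} \;+\; \left(\mu_{0}-\mu_{\infty}\right)
\left[\,1+\left(\lambda\dot{\gamma}\right)^{a}\,\right]^{\frac{n-1}{a}}
```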
132. Choice of boundary condition for lattice-Boltzmann simulation of moderate-Reynolds-number flow in complex domains
- Author
-
Rupert W, Nash, Hywel B, Carver, Miguel O, Bernabeu, James, Hetherington, Derek, Groen, Timm, Krüger, and Peter V, Coveney
- Subjects
Models, Cardiovascular ,Animals ,Blood Vessels ,Humans ,Computer Simulation ,Numerical Analysis, Computer-Assisted ,Blood Viscosity ,Rheology ,Algorithms ,Blood Flow Velocity - Abstract
Modeling blood flow in larger vessels using lattice-Boltzmann methods comes with a challenging set of constraints: a complex geometry with walls and inlets and outlets at arbitrary orientations with respect to the lattice, intermediate Reynolds (Re) number, and unsteady flow. Simple bounce-back is one of the most commonly used, simplest, and most computationally efficient boundary conditions, but many others have been proposed. We implement three other methods applicable to complex geometries [Guo, Zheng, and Shi, Phys. Fluids 14, 2007 (2002); Bouzidi, Firdaouss, and Lallemand, Phys. Fluids 13, 3452 (2001); Junk and Yang, Phys. Rev. E 72, 066701 (2005)] in our open-source application hemelb. We use these to simulate Poiseuille and Womersley flows in a cylindrical pipe with an arbitrary orientation at physiologically relevant Reynolds (1-300) and Womersley (4-12) numbers, and steady flow in a curved pipe at relevant Dean numbers (100-200), and compare the accuracy to analytical solutions. We find that both the Bouzidi-Firdaouss-Lallemand (BFL) and Guo-Zheng-Shi (GZS) methods give second-order convergence in space, while simple bounce-back degrades to first order. The BFL method appears to perform better than GZS in unsteady flows and is significantly less computationally expensive. The Junk-Yang method shows poor stability at larger Reynolds numbers and so cannot be recommended here. The choice of collision operator (lattice Bhatnagar-Gross-Krook vs multiple relaxation time) and velocity set (D3Q15 vs D3Q19 vs D3Q27) does not significantly affect the accuracy in the problems studied.
- Published
- 2012
133. Analyzing and Modeling the Performance of the HemeLB Lattice-Boltzmann Simulation Environment
- Author
-
Rupert W. Nash, Peter V. Coveney, Miguel O. Bernabeu, Hywel B. Carver, James Hetherington, and Derek Groen
- Subjects
Parallel computing ,G.4 ,General Computer Science ,Slowdown ,Lattice Boltzmann methods ,FOS: Physical sciences ,010103 numerical & computational mathematics ,01 natural sciences ,010305 fluids & plasmas ,Theoretical Computer Science ,Lattice boltzmann simulation ,G.1.0 ,Modelling and Simulation ,Lattice (order) ,0103 physical sciences ,Statistical physics ,0101 mathematics ,High-performance computing ,Performance modelling ,Physics ,I.3.1 ,I.6.3 ,Fluid Dynamics (physics.flu-dyn) ,Physics - Fluid Dynamics ,Computational Physics (physics.comp-ph) ,Supercomputer ,Visualization ,Modeling and Simulation ,I.6.8 ,68W10, 68W40, 68U20, 68N30, 65Yxx ,Physics - Computational Physics ,Computer Science(all) ,Lattice-Boltzmann - Abstract
We investigate the performance of the HemeLB lattice-Boltzmann simulator for cerebrovascular blood flow, aimed at providing timely and clinically relevant assistance to neurosurgeons. HemeLB is optimised for sparse geometries, supports interactive use, and scales well to 32,768 cores for problems with ~81 million lattice sites. We obtain a maximum performance of 29.5 billion site updates per second, with only an 11% slowdown for highly sparse problems (5% fluid fraction). We present steering and visualisation performance measurements and provide a model which allows users to predict the performance, thereby determining how to run simulations with maximum accuracy within time constraints., Accepted by the Journal of Computational Science. 33 pages, 16 figures, 7 tables
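As a rough illustration of how such a performance model lets users plan runs within time constraints, the Python sketch below converts a measured site-update rate into a wall-clock estimate. The function, its parameters and the example numbers are illustrative assumptions and do not reproduce HemeLB's published performance model.

def estimate_runtime_seconds(sites, time_steps, cores, updates_per_core_per_s, sparse_slowdown=0.11):
    """Estimate wall-clock time from a measured per-core site-update rate.

    sparse_slowdown is the fractional penalty assumed for highly sparse geometries.
    """
    effective_rate = cores * updates_per_core_per_s * (1.0 - sparse_slowdown)
    return sites * time_steps / effective_rate

# Hypothetical example: ~81 million sites, 10,000 steps, 32,768 cores at ~0.9 million updates/s per core.
print(estimate_runtime_seconds(81e6, 1e4, 32768, 0.9e6))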
- Published
- 2012
134. Survey of Multiscale and Multiphysics Applications and Communities
- Author
-
Peter V. Coveney, Stefan J. Zasada, and Derek Groen
- Subjects
FOS: Computer and information sciences ,Physics - Physics and Society ,J.2 ,Theoretical computer science ,J.3 ,General Computer Science ,ComputerSystemsOrganization_COMPUTERSYSTEMIMPLEMENTATION ,Computer science ,Multiphysics ,Other Computer Science (cs.OH) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,FOS: Physical sciences ,Physics and Society (physics.soc-ph) ,I.6 ,Coupling toolkits ,D.2.11 ,D.2.12 ,Multiscale computing ,Multiscale communities ,Computer Science - Other Computer Science ,Scientific computing ,Multiscale software ,Scale (chemistry) ,I.6.3 ,General Engineering ,I.6.5 ,Data science ,D.0 ,I.6.8 ,Construct (philosophy) ,Application review - Abstract
Multiscale and multiphysics applications are now commonplace, and many researchers focus on combining existing models to construct combined multiscale models. Here we present a concise review of multiscale applications and their source communities. We investigate the prevalence of multiscale projects in the EU and the US, review a range of coupling toolkits they use to construct multiscale models and identify areas where collaboration between disciplines could be particularly beneficial. We conclude that multiscale computing has become increasingly popular in recent years, that different communities adopt very different approaches to constructing multiscale simulations, and that simulations on a length scale of a few metres and a time scale of a few hours can be found in many of the multiscale research domains. Communities may receive additional benefit from sharing methods that are geared towards these scales., 12 pages, 6 figures, 2 tables. Accepted by CiSE (with a constrained number of references; these were put in a separate literature list)
- Published
- 2012
135. Distributed Multiscale Simulations of Clay-Polymer Nanocomposites
- Author
-
James L. Suter, Derek Groen, Peter V. Coveney, and Lara Kabalan
- Subjects
chemistry.chemical_classification ,Length scale ,Molecular dynamics ,Mesoscopic physics ,Materials science ,Nanocomposite ,Polymer nanocomposite ,chemistry ,Nanoparticle ,Polymer ,Dispersion (chemistry) ,Biological system - Abstract
The mechanical enhancement of polymers when clay nanoparticles are dispersed within them depends on factors over various length scales; for example, the orientation of the clay platelets in the polymer matrix will affect the mechanical resistance of the composite, while at the shortest scale the molecular arrangement and the adhesion energy of the polymer molecules in the galleries and in the vicinity of the clay-polymer interface will also affect the overall mechanical properties. In this paper, we address the challenge of creating a hierarchical multiscale modelling scheme to traverse a sufficiently wide range of time and length scales to simulate clay-polymer nanocomposites effectively. This scheme ranges from the electronic structure (to capture the polymer-clay interactions, especially those of the reactive clay edges) through classical atomistic molecular dynamics to coarse-grained models (to capture the long length scale structure). Such a scenario is well suited to distributed computing, with each level of the scheme allocated to a suitable computational resource. We describe how the e-infrastructure and tools developed by the MAPPER (Multiscale Applications on European e-Infrastructures) project facilitate our multiscale scheme. Using this new technology, we have simulated clay-polymer systems containing up to several million atoms/particles. This system size is firmly within the mesoscopic regime, containing several clay platelets with the edges of the platelets explicitly resolved. We show preliminary results of a “bottom-up” multiscale simulation of a clay platelet dispersion in poly(ethylene) glycol.
- Published
- 2012
- Full Text
- View/download PDF
136. Modelling Distributed Multiscale Simulation Performance: An Application to Nanocomposites
- Author
-
James L. Suter, Peter V. Coveney, and Derek Groen
- Subjects
Coupling ,Polymer nanocomposite ,Scale (ratio) ,Computer science ,Ranging ,01 natural sciences ,010305 fluids & plasmas ,Computational science ,Molecular dynamics ,0103 physical sciences ,Range (statistics) ,Particle ,010306 general physics ,Quantum - Abstract
Clay-polymer nanocomposites are a new range of particle-filled composite materials in which interactions occur over many different length scales, ranging from the quantum mechanical level to the macroscopic. Multiscale simulation is therefore an important technique to understand and, ultimately, predict the properties of the composites from their individual components. We describe two multiscale simulation scenarios in which we couple simulations running at different levels of scale: in the loosely-coupled scheme we have a unidirectional coupling of one level to the next, while in the tightly-coupled scheme we have simulations creating multiple inputs and parameters for simulations at different levels, running concurrently. We present a performance model that predicts the multiscale efficiency of our multiscale application. Here the multiscale efficiency constitutes the fraction of runtime spent on executing the simulation codes, rather than on operations facilitating the coupling between the simulations. We find that the efficiency is high (greater than 90%) until the number of sub-simulations exceeds a critical number (> 10 in our examples).
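A minimal sketch of the efficiency measure described above, assuming the total runtime can be split into time spent inside the simulation codes and time spent in coupling operations (the function and variable names are illustrative, not taken from the paper):

def multiscale_efficiency(simulation_times, coupling_times):
    """Fraction of total runtime spent executing the simulation codes themselves."""
    t_sim = sum(simulation_times)
    t_couple = sum(coupling_times)
    return t_sim / (t_sim + t_couple)

# Hypothetical example: ten sub-simulations of 120 s each, with 5 s of coupling overhead per sub-simulation.
print(multiscale_efficiency([120.0] * 10, [5.0] * 10))  # 0.96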
- Published
- 2011
- Full Text
- View/download PDF
137. Taxonomy of Multiscale Computing Communities
- Author
-
Peter V. Coveney, Derek Groen, and Stefan J. Zasada
- Subjects
Collaborative software ,Management science ,business.industry ,Computer science ,Taxonomy (general) ,Scale (chemistry) ,business ,Multiscale modeling - Abstract
We present a concise and comprehensive review of research communities which perform multiscale computing. We provide an overview of communities in a range of domains, and compare these communities to assess the level of use of multiscale methods in different research domains. Additionally, we characterize several areas where inter-disciplinary multiscale collaboration or the introduction of common and reusable methods could be particularly beneficial. We conclude that multiscale computing has become increasingly popular in recent years, that different communities adopt radically different organizational approaches, and that simulations on a length scale of a few metres and a time scale of a few hours can be found in many of the multiscale research domains. Sharing multiscale methods specifically geared towards these scales between communities may therefore be particularly beneficial.
- Published
- 2011
- Full Text
- View/download PDF
138. Developing an infrastructure to support multiscale modelling and simulation
- Author
-
Derek Groen, Peter V. Coveney, and Stefan J. Zasada
- Subjects
Software ,Multidisciplinary approach ,Management science ,business.industry ,Computer science ,Scale (chemistry) ,Extreme scale computing ,Systems engineering ,business ,Multiscale modeling ,Scientific disciplines - Abstract
Today scientists and engineers are commonly faced with the challenge of modelling, predicting and controlling multiscale systems which cross scientific disciplines and where several processes acting at different scales coexist and interact. Such multidisciplinary multiscale models, when simulated in three dimensions, require large scale or even extreme scale computing capabilities. The MAPPER project is developing computational strategies, software and services for distributed multiscale simulations across disciplines, exploiting existing and evolving e-infrastructure. To facilitate such an infrastructure, the MAPPER project is developing and deploying a multi-tiered software stack, which we will describe in this talk.
- Published
- 2011
- Full Text
- View/download PDF
139. A lightweight communication library for distributed computing
- Author
-
Paola Grosso, Derek Groen, Simon Portegies Zwart, Cees de Laat, Steven Rieder, and System and Network Engineering (IVI, FNWI)
- Subjects
FOS: Computer and information sciences ,020203 distributed computing ,Numerical Analysis ,Computer science ,Scale (chemistry) ,Message passing ,General Physics and Astronomy ,02 engineering and technology ,computer.software_genre ,68M14 (primary), 68M20, 85-08, 85A40 (secondary) ,Computational Mathematics ,Coupling (computer programming) ,Computer Science - Distributed, Parallel, and Cluster Computing ,Wide area network ,C.2.5 ,0202 electrical engineering, electronic engineering, information engineering ,Operating system ,C.2.4 ,020201 artificial intelligence & image processing ,Compiler ,Distributed, Parallel, and Cluster Computing (cs.DC) ,User interface ,computer - Abstract
We present MPWide, a platform independent communication library for performing message passing between computers. Our library allows coupling of several local MPI applications through a long distance network and is specifically optimized for such communications. The implementation is deliberately kept light-weight, platform independent and the library can be installed and used without administrative privileges. The only requirements are a C++ compiler and at least one open port to a wide area network on each site. In this paper we present the library, describe the user interface, present performance tests and apply MPWide in a large scale cosmological N-body simulation on a network of two computers, one in Amsterdam and the other in Tokyo., 17 pages, 10 figures, published in Computational Science & Discovery
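MPWide's own API is not reproduced here; purely as an illustration of the underlying idea (each site keeps at least one open TCP port to the wide area network and streams length-prefixed messages between the locally running MPI applications), a minimal Python socket sketch might look as follows. All function names are hypothetical.

import socket

def _recv_exact(conn, n):
    """Read exactly n bytes from a connected socket."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def wan_send(host, port, payload):
    """Push one length-prefixed message to the remote site over plain TCP."""
    with socket.create_connection((host, port)) as s:
        s.sendall(len(payload).to_bytes(8, "big"))
        s.sendall(payload)

def wan_recv(port):
    """Accept one length-prefixed message on an open port."""
    with socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            size = int.from_bytes(_recv_exact(conn, 8), "big")
            return _recv_exact(conn, size)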
- Published
- 2010
- Full Text
- View/download PDF
140. Running parallel applications with topology-aware grid middleware
- Author
-
Valentin Kravtsov, Assaf Schuster, Martin T. Swain, Camille Coti, Derek Groen, Thomas Herault, and Pavel Bar
- Subjects
Computational model ,Exploit ,Computer science ,Distributed computing ,Evolutionary algorithm ,02 engineering and technology ,Grid ,Network topology ,computer.software_genre ,01 natural sciences ,Evolutionary computation ,Scheduling (computing) ,Grid computing ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,[INFO.INFO-DC]Computer Science [cs]/Distributed, Parallel, and Cluster Computing [cs.DC] ,010303 astronomy & astrophysics ,computer - Abstract
No abstract is available in this record; the source page provides only indexing metadata. Recoverable details: author keywords Grid, MPI, QCG-OMPI, QosCosGrid, topology-aware; presented at the IEEE International Conference on e-Science (Oxford, 9-11 December 2009), starting on page 292; DOI 10.1109/e-Science.2009.48.
- Published
- 2009
- Full Text
- View/download PDF
141. Ten Simple Rules for a Successful Cross-Disciplinary Collaboration
- Author
-
Timo Minssen, Joanna Lewis, James M. Osborne, Maria Bruna, Greg J. McInerny, Alexander G. Fletcher, Jonathan Cooper, Miguel O. Bernabeu, Christian A. Yates, Bernhard Knapp, Ben Calderhead, Joe Pitt-Francis, Rémi Bardenet, Bram Kuijper, Verena Paulitschke, David J. Gavaghan, Rafel Bordas, Charlotte M. Deane, Derek Groen, and Jelena Todoric
- Subjects
Value (ethics) ,Biochemistry & Molecular Biology ,Vocabulary ,QH301-705.5 ,Bioinformatics ,Computer science ,Science ,media_common.quotation_subject ,Interdisciplinary Studies ,Bibliometrics ,Biochemical Research Methods ,Field (computer science) ,Cellular and Molecular Neuroscience ,[MATH.MATH-ST]Mathematics [math]/Statistics [math.ST] ,Order (exchange) ,Genetics ,Organizational Objectives ,Biology (General) ,Cooperative Behavior ,Molecular Biology ,ComputingMilieux_MISCELLANEOUS ,01 Mathematical Sciences ,Ecology, Evolution, Behavior and Systematics ,media_common ,08 Information And Computing Sciences ,Enthusiasm ,Science & Technology ,Ecology ,Management science ,Information sharing ,06 Biological Sciences ,Data science ,Leadership ,Editorial ,Computational Theory and Mathematics ,Models, Organizational ,Modeling and Simulation ,Interdisciplinary Communication ,Mathematical & Computational Biology ,Life Sciences & Biomedicine ,Discipline ,Algorithms - Abstract
Cross-disciplinary collaborations have become an increasingly important part of science. They are seen as key if we are to find solutions to pressing, global-scale societal challenges, including green technologies, sustainable food production, and drug development. Regulators and policy-makers have realized the power of such collaborations, for example, in the 80 billion Euro "Horizon 2020" EU Framework Programme for Research and Innovation. This programme puts special emphasis on “breaking down barriers to create a genuine single market for knowledge, research and innovation” (http://ec.europa.eu/programmes/horizon2020/en/what-horizon-2020). Cross-disciplinary collaborations are key to all partners in computational biology. On the one hand, for scientists working in theoretical fields such as computer science, mathematics, or statistics, validation of predictions against experimental data is of the utmost importance. On the other hand, experimentalists, such as molecular biologists, geneticists, or clinicians, often want to reduce the number of experiments needed to achieve a certain scientific aim, to obtain insight into processes that are inaccessible using current experimental techniques, or to handle large volumes of data, which are far beyond any human analysis skills. The synergistic and skilful combining of different disciplines can achieve insight beyond current borders and thereby generate novel solutions to complex problems. The combination of methods and data from different fields can achieve more than the sum of the individual parts could do alone. This applies not only to computational biology but also to many other academic disciplines. Initiating and successfully maintaining cross-disciplinary collaborations can be challenging but highly rewarding. In a previous publication in this series, ten simple rules for a successful collaboration were proposed [1]. In the present guide, we go one step further and focus on the specific challenges associated with cross-disciplinary research, from the perspective of the theoretician in particular. As research fellows of the 2020 Science project (http://www.2020science.net) and collaboration partners, we bring broad experience of developing interdisciplinary collaborations. We intend this guide to be for early career computational researchers as well as more senior scientists who are entering a cross-disciplinary setting for the first time. We describe the key benefits, as well as some possible pitfalls, arising from collaborations between scientists with very different backgrounds. Rule 1: Enjoy Entering a Completely New Field of Research. Collaborating with scientists from other disciplines is an opportunity to learn about cutting-edge science directly from experts. Make the most of being the novice. No one expects you to know everything about the new field. In particular, there is no pressure to understand everything immediately, so ask the “stupid” questions. Demonstrating your interest and enthusiasm is of much higher value than pretending to know everything already. An interested audience makes information sharing much easier for all partners in a collaboration. You should prepare for a deluge of new ideas and approaches. It is a good practice to read relevant textbooks and review papers, which your collaborators should be able to recommend, in order to quickly grasp the vocabulary (see Rule 3) and key ideas of the new field.
This will make it easier for you to establish a common parlance between you and your collaborators, and allow you to build from there. You should try to discuss your work with a range of scientists from complementary fields. As well as getting feedback, this can help you identify new collaborative opportunities. Remember that contacts that do not lead directly to collaborations can still prove useful later in your career.
- Published
- 2015
- Full Text
- View/download PDF
142. Clay-Polymer Nanocomposites: Chemically Specific Multiscale Modeling of Clay-Polymer Nanocomposites Reveals Intercalation Dynamics, Tactoid Self-Assembly and Emergent Materials Properties (Adv. Mater. 6/2015)
- Author
-
Derek Groen, Peter V. Coveney, and James L. Suter
- Subjects
chemistry.chemical_classification ,Materials science ,Polymer nanocomposite ,Mechanical Engineering ,Intercalation (chemistry) ,Nanotechnology ,Electronic structure ,Polymer ,Multiscale modeling ,Chemical engineering ,chemistry ,Mechanics of Materials ,Molecule ,General Materials Science ,Self-assembly - Abstract
On page 966, P. V. Coveney and co-workers show the dynamic mechanism of the intercalation of polymer molecules into the galleries of a layered clay material, simulated through a chemically specific multiscale modelling scheme. This commences at the quantum mechanical level, shown by the electronic structure of the clay edge, and systematically transfers information through an atomistic representation to a coarse-grained description of the polymer (orange) and clay (cyan).
- Published
- 2015
- Full Text
- View/download PDF
143. On-line Application Performance Monitoring of Blood Flow Simulation in Computational Grid Architectures
- Author
-
Derek Groen, Peter M. A. Sloot, Alfredo Tirado-Ramos, Computational Science Lab (IVI, FNWI), School of Computer Engineering, and IEEE Symposium on Computer-Based Medical Systems (18th : 2005 : Dublin, Ireland)
- Subjects
Engineering::Computer science and engineering [DRNTU] ,Grid network ,Computer science ,Distributed computing ,Lattice Boltzmann methods ,Computational resource ,Grid ,computer.software_genre ,Computational science ,Resource (project management) ,Grid computing ,Information system ,Concurrent computing ,computer - Abstract
We report on our findings after running a number of on-line performance monitoring experiments with a biomedical parallel application to investigate levels of performance at hardware resources distributed across a computational Grid network. We use on-line application monitoring for improved computational resource selection and application optimization. We used a number of user-defined performance metrics within the European CrossGrid Project's G-PM tool together with a blood flow simulation application based on the lattice Boltzmann method for fluid dynamics. We found that the performance results observed during our on-line experiments give us a more accurate view of computational resource status than the regular resource information provided by standard information services to resource brokers, and that on-line monitoring has good potential for optimizing our biomedical application for more efficient runs.
- Published
- 2005
144. Mechanism of Exfoliation and Prediction of Materials Properties of Clay–Polymer Nanocomposites from Multiscale Modeling
- Author
-
James L. Suter, Derek Groen, and Peter V. Coveney
- Full Text
- View/download PDF
145. Hybrid Simulation Development – Is It Just Analytics?
- Author
-
Derek Groen, Steffen Strassburger, Jonathan Ozik, Navonil Mustafee, and David Bell
- Subjects
0303 health sciences ,Computer science ,business.industry ,Distributed computing ,Scale (chemistry) ,01 natural sciences ,010104 statistics & probability ,03 medical and health sciences ,Range (mathematics) ,Software ,Coupling (computer programming) ,Analytics ,Component-based software engineering ,Data analysis ,0101 mathematics ,business ,Representation (mathematics) ,030304 developmental biology - Abstract
Hybrid simulations can take many forms, often connecting a diverse range of hardware and software components with heterogeneous data sets. The scale of examples is also diverse, ranging from high-performance data analytics (HPDA) in the high-performance computing community to the synthesis of software libraries or packages on a single machine. Hybrid simulation configuration and output analysis is often akin to analytics, with a range of dashboards, machine learning, data aggregations and graphical representations. Underpinning the visual elements are hardware, software and data architectures that execute hybrid simulation code. These are wide-ranging, with few generalized blueprints, methods or patterns of development. This panel will discuss a range of hybrid simulation development approaches and endeavor to uncover possible strategies for supporting the development and coupling of hybrid simulations. (Supported in part by the U.S. Department of Energy, Office of Science, under contract number DE-AC02-06CH11357.)
- Full Text
- View/download PDF
146. Towards Accurate Simulation of Global Challenges on Data Centers Infrastructures via Coupling of Models and Data Sources
- Author
-
F. Javier Nieto de Santos, Bernhard C. Geiger, Dennis Hoppe, Robert Elsässer, Sergiy Gogolenko, John Hanley, Derek Groen, M. Lawenda, Imran Mahmood, Mark Kröll, Milana Vuckovic, and Diana Suleimenova
- Subjects
Computer science ,workflow ,Data management ,Distributed computing ,Global challenges ,global systems science ,Cloud Data-as-a-Service ,02 engineering and technology ,Bridge (nautical) ,Article ,Domain (software engineering) ,Workflow ,Coupling ,Multiscale modelling ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,global challenges ,coupling ,business.industry ,Global systems science ,020206 networking & telecommunications ,Supercomputer ,streaming data ,Coupling (computer programming) ,Streaming data ,HPC ,cloud data-as-a-service ,multiscale modelling ,data management ,business ,HPDA - Abstract
Accurate digital twinning of the global challenges (GC) leads to computationally expensive coupled simulations. These simulations bring together not only different models, but also various sources of massive static and streaming data sets. In this paper, we explore ways to bridge the gap between traditional high performance computing (HPC) and data-centric computation in order to provide efficient technological solutions for accurate policy-making in the domain of GC. GC simulations in HPC environments give rise to a number of technical challenges related to coupling. Because they are intended to reflect the current and upcoming situation for policy-making, GC simulations make extensive use of recent streaming data from external data sources, which requires changing traditional HPC systems operation. Another common challenge stems from the necessity to couple simulations and exchange data across data centers in GC scenarios. By introducing a generalized GC simulation workflow, this paper shows the commonality of the technical challenges for various GC and reflects on the approaches to tackle these technical challenges in the HiDALGO project. This research has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement no. 824115 (HiDALGO).
- Full Text
- View/download PDF
147. Experience with the International Testbed in the CrossGrid Project
- Author
-
J. Salt, Brian Coghlan, David A. Cano, A. Ozieblo, Elisa Heymann, V. Lara, P. Lason, D. Rodríguez, Yiannis Cotronis, Miquel A. Senar, Rafael Marco, Marios D. Dikaiakos, Christos Kanellopoulos, Pawel Wolniewicz, A. Padee, Carlos Fernández, Krzysztof Nawrocki, Wojciech Wislicki, Marcus Hardt, Javier Sánchez, Javier Fontan, Harald Kornmayer, P. Nyczyk, Ján Astalos, I. Diaz, Jorge Gomes, Evangelos Floros, Alyssa Garcia, Sonia González, Derek Groen, Wei Xing, L. Bernardo, Farida Fassi, A. Ramos, João Martins, Mario David, Jesus Marco, Michal Bluj, and George Tsouloupas
- Subjects
Computer science ,Middleware ,Testbed ,0202 electrical engineering, electronic engineering, information engineering ,Message Passing Interface ,Operating system ,020206 networking & telecommunications ,020201 artificial intelligence & image processing ,02 engineering and technology ,computer.software_genre ,Grid ,computer - Abstract
The International Testbed of the CrossGrid Project has been in operation for the last three years, including 16 sites in 9 countries across Europe. The main achievements in installation and operation are described, and also the substantial experience gained on providing support to application and middleware developers in the project. Results are presented showing the availability of a realistic Grid framework to execute distributed interactive and parallel jobs.
148. How Policy Decisions Affect Refugee Journeys in South Sudan: A Study Using Automated Ensemble Simulations
- Author
-
Derek Groen and Diana Suleimenova
- Subjects
validation ,Horizon (archaeology) ,Refugee ,General Social Sciences ,refugee modelling ,02 engineering and technology ,Affect (psychology) ,agent-based modelling ,01 natural sciences ,010305 fluids & plasmas ,automation toolkit ,policy decisions ,Economy ,sensitivity analysis ,Policy decision ,Political science ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science (miscellaneous) ,media_common.cataloged_instance ,020201 artificial intelligence & image processing ,European union ,media_common - Abstract
Forced displacement has a huge impact on society today, as more than 68 million people are forcibly displaced worldwide. Existing methods for forecasting the arrival of migrants, especially refugees, may help us to better allocate humanitarian support and protection. However, few researchers have investigated the effects of policy decisions, such as border closures, on the movement of these refugees. Recently established simulation development approaches have made it possible to conduct such a study. In this paper, we use such an approach to investigate the effect of policy decisions on refugee arrivals for the South Sudan refugee crisis. To make such a study feasible in terms of human effort, we rely on agent-based modelling, and have automated several phases of simulation development using the FabFlee automation toolkit. We observe a decrease in the average relative difference from 0.615 to 0.499 as we improved the simulation model with additional information. Moreover, we conclude that a border closure and a reduction in camp capacity induce fewer refugee arrivals and more time spent travelling to other camps, while a border opening and an increase in camp capacity result in only a limited increase in refugee arrivals at the destination camps. To the best of our knowledge, we are the first to conduct such an investigation for this conflict. This work was supported by the European Union Horizon 2020 research and innovation programme through the VECMA and HiDALGO projects, grant agreement nos. 800925 and 824115.
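In the Flee/FabFlee literature, the averaged relative difference is typically computed by comparing simulated and observed camp populations, normalised by the total number of observed arrivals; the sketch below is a minimal illustration under that assumption (the exact normalisation and camp data used in the paper are not reproduced here).

def averaged_relative_difference(simulated, observed):
    """Sum of absolute per-camp errors, normalised by the total observed arrivals."""
    total_observed = sum(observed.values())
    error = sum(abs(simulated[camp] - observed[camp]) for camp in observed)
    return error / total_observed

# Hypothetical example with three destination camps.
sim = {"camp_a": 1200, "camp_b": 800, "camp_c": 400}
obs = {"camp_a": 1000, "camp_b": 900, "camp_c": 500}
print(averaged_relative_difference(sim, obs))  # ~0.167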
- Full Text
- View/download PDF
149. Computational Science - ICCS 2022 - 22nd International Conference, London, UK, June 21-23, 2022, Proceedings, Part III
- Author
-
Derek Groen, Clélia de Mulatier, Maciej Paszynski, Valeria V. Krzhizhanovskaya, Jack J. Dongarra, and Peter M. A. Sloot
- Published
- 2022
- Full Text
- View/download PDF