16 results for "Castro, Pablo de Oliveira"
Search Results
2. Enabling mixed-precision with the help of tools: A Nekbone case study
- Author
-
Chen, Yanxiang, Castro, Pablo de Oliveira, Bientinesi, Paolo, and Iakymchuk, Roman
- Subjects
Computer Science - Mathematical Software ,Computer Science - Distributed, Parallel, and Cluster Computing ,Computer Science - Software Engineering - Abstract
Mixed-precision computing has the potential to significantly reduce the cost of exascale computations, but determining when and how to implement it in programs can be challenging. In this article, we consider Nekbone, a mini-application for the CFD solver Nek5000, as a case study, and propose a methodology for enabling mixed-precision with the help of computer arithmetic tools and the roofline model. We evaluate the derived mixed-precision program by combining metrics in three dimensions: accuracy, time-to-solution, and energy-to-solution. Notably, the introduction of mixed-precision in Nekbone reduces time-to-solution by 40.7% and energy-to-solution by 47% on 128 MPI ranks.
- Published
- 2024
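A loose illustration of the accuracy dimension of such an evaluation (not the paper's tooling; the Jacobi kernel, matrix, and sizes below are invented for this sketch): demote a solver kernel to binary32 and measure its deviation from a binary64 reference.

    import numpy as np

    def jacobi(A, b, dtype, iters=200):
        """Jacobi iterations carried out entirely in the given precision."""
        A = A.astype(dtype)
        b = b.astype(dtype)
        D = np.diag(A).copy()
        R = A - np.diag(D)
        x = np.zeros_like(b)
        for _ in range(iters):
            x = (b - R @ x) / D
        return x

    rng = np.random.default_rng(0)
    n = 100
    A = np.eye(n) * n + rng.standard_normal((n, n))  # diagonally dominant
    b = rng.standard_normal(n)

    x64 = jacobi(A, b, np.float64)
    x32 = jacobi(A, b, np.float32)
    rel = np.linalg.norm(x32.astype(np.float64) - x64) / np.linalg.norm(x64)
    print(f"relative deviation of the binary32 solve: {rel:.2e}")

Time-to-solution and energy-to-solution would then be measured on the real kernels; this sketch only covers the accuracy check.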
3. Bounds on non-linear errors for variance computation with stochastic rounding
- Author
-
Arar, El-Mehdi El, Sohier, Devan, Castro, Pablo de Oliveira, and Petit, Eric
- Subjects
Mathematics - Numerical Analysis - Abstract
The main objective of this work is to investigate non-linear errors and pairwise summation using stochastic rounding (SR) in variance computation algorithms. We estimate the forward error of computations under SR through two methods: the first is based on a bound of the variance and the Bienaymé-Chebyshev inequality, while the second is based on martingales and the Azuma-Hoeffding inequality. The study shows that for pairwise summation, using SR results in a probabilistic bound of the forward error proportional to $\sqrt{\log(n)}u$ rather than the deterministic bound in $O(\log(n)u)$ obtained with the default rounding mode. We examine two algorithms that compute the variance, called "textbook" and "two-pass", which both exhibit non-linear errors. Using the two methods mentioned above, we show that these algorithms' forward errors have probabilistic bounds under SR in $O(\sqrt{n}u)$ instead of $nu$ for the deterministic bounds. We show that this advantage holds using pairwise summation for both textbook and two-pass, with probabilistic bounds of the forward error proportional to $\sqrt{\log(n)}u$., Comment: SIAM Journal on Scientific Computing, In press
- Published
- 2023
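For readers unfamiliar with the two variance algorithms named above, a minimal NumPy sketch of the "textbook" and "two-pass" recurrences (illustrative only; the paper's SR analysis is not reproduced here):

    import numpy as np

    def variance_textbook(x):
        # Textbook form: sum of squares minus squared sum. The subtraction
        # of two large, nearly equal terms is the non-linear error source.
        n = len(x)
        return (np.sum(x * x) - np.sum(x) ** 2 / n) / (n - 1)

    def variance_two_pass(x):
        # Two-pass form: first the mean, then the centered sum of squares.
        n = len(x)
        m = np.sum(x) / n
        return np.sum((x - m) ** 2) / (n - 1)

    x = np.random.default_rng(1).normal(loc=1e6, scale=1.0, size=10_000)
    print(variance_textbook(x.astype(np.float32)))  # ruined by cancellation
    print(variance_two_pass(x.astype(np.float32)))  # close to the true 1.0

The large offset (loc=1e6) makes the cancellation in the textbook form visible even at this small size.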
4. TREXIO: A File Format and Library for Quantum Chemistry
- Author
-
Posenitskiy, Evgeny, Chilkuri, Vijay Gopal, Ammar, Abdallah, Hapka, Michał, Pernal, Katarzyna, Shinde, Ravindra, Borda, Edgar Josué Landinez, Filippi, Claudia, Nakano, Kosuke, Kohulák, Otto, Sorella, Sandro, Castro, Pablo de Oliveira, Jalby, William, Ríos, Pablo López, Alavi, Ali, and Scemama, Anthony
- Subjects
Physics - Chemical Physics - Abstract
TREXIO is an open-source file format and library developed for the storage and manipulation of data produced by quantum chemistry calculations. It is designed with the goal of providing a reliable and efficient method of storing and exchanging wave function parameters and matrix elements, making it an important tool for researchers in the field of quantum chemistry. In this work, we present an overview of the TREXIO file format and library. The library consists of a front-end implemented in the C programming language and two different back-ends: a text back-end and a binary back-end utilizing the HDF5 library which enables fast read and write operations. It is compatible with a variety of platforms and has interfaces for the Fortran, Python, and OCaml programming languages. In addition, a suite of tools has been developed to facilitate the use of the TREXIO format and library, including converters for popular quantum chemistry codes and utilities for validating and manipulating data stored in TREXIO files. The simplicity, versatility, and ease of use of TREXIO make it a valuable resource for researchers working with quantum chemistry data., Comment: 13 pages, 2 figures
- Published
- 2023
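A hedged usage sketch of the TREXIO Python interface, reconstructed from the library's documentation as we recall it (the function names and the back-end constant below should be checked against the installed version):

    import trexio

    # Write three nuclei of a water molecule with the HDF5 back-end,
    # then read the count back.
    f = trexio.File("water.h5", mode="w", back_end=trexio.TREXIO_HDF5)
    trexio.write_nucleus_num(f, 3)
    trexio.write_nucleus_charge(f, [8.0, 1.0, 1.0])
    trexio.write_nucleus_coord(f, [[0.0, 0.0, -0.1173],
                                   [0.0, 0.7572, 0.4692],
                                   [0.0, -0.7572, 0.4692]])
    f.close()

    f = trexio.File("water.h5", mode="r", back_end=trexio.TREXIO_HDF5)
    print(trexio.read_nucleus_num(f))  # -> 3
    f.close()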
5. Stochastic rounding variance and probabilistic bounds: A new approach
- Author
-
Arar, El-Mehdi El, Sohier, Devan, Castro, Pablo de Oliveira, and Petit, Eric
- Subjects
Mathematics - Numerical Analysis - Abstract
Stochastic rounding (SR) offers an alternative to the deterministic IEEE-754 floating-point rounding modes. In some applications such as PDEs, ODEs and neural networks, SR empirically improves the numerical behavior and convergence to accurate solutions, although no sound theoretical background has been provided. Recent works by Ipsen, Zhou, Higham, and Mary have computed SR probabilistic error bounds for basic linear algebra kernels. For example, the inner product SR probabilistic bound of the forward error is proportional to $\sqrt{n}u$ instead of $nu$ for the default rounding mode. To compute the bounds, these works show that the errors accumulated in computation form a martingale. This paper proposes an alternative framework to characterize SR errors based on the computation of the variance. We pinpoint common error patterns in numerical algorithms and propose a lemma that bounds their variance. For each probability, through the Bienaymé-Chebyshev inequality, this bound leads to a better probabilistic error bound in several situations. Our method has the advantage of providing a tight probabilistic bound for all algorithms fitting our model. We show how the method can be applied to give SR error bounds for the inner product and Horner polynomial evaluation.
- Published
- 2022
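A toy model of the setting studied above (our construction, not the paper's): stochastic rounding to a reduced precision of t fractional bits, applied to every operation of Horner's evaluation, so the error distribution can be sampled from ordinary double precision.

    import numpy as np

    rng = np.random.default_rng(2)

    def sr_round(x, t=10):
        """SR-nearness at t fractional bits: round up with probability equal
        to x's relative position between its two neighbouring values."""
        m, e = np.frexp(x)              # x = m * 2**e with 0.5 <= |m| < 1
        scaled = np.ldexp(m, t)         # scale so that one ulp == 1
        low = np.floor(scaled)
        up = rng.random() < (scaled - low)
        return np.ldexp(low + up, e - t)

    def horner_sr(coeffs, x, t=10):
        """Horner's rule with every add and multiply stochastically rounded."""
        acc = coeffs[0]
        for c in coeffs[1:]:
            acc = sr_round(sr_round(acc * x, t) + c, t)
        return acc

    coeffs = rng.standard_normal(50)
    x = 0.99
    samples = np.array([horner_sr(coeffs, x) for _ in range(1000)])
    exact = np.polyval(coeffs, x)
    print("mean error:", abs(samples.mean() - exact))  # small: SR is unbiased
    print("std dev   :", samples.std())                # spread of SR errors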
6. The Positive Effects of Stochastic Rounding in Numerical Algorithms
- Author
-
Arar, El-Mehdi El, Sohier, Devan, Castro, Pablo de Oliveira, and Petit, Eric
- Subjects
Mathematics - Numerical Analysis - Abstract
Recently, stochastic rounding (SR) has been implemented in specialized hardware, but most current computing nodes do not yet support this rounding mode. Several works empirically illustrate the benefit of stochastic rounding in various fields such as neural networks and ordinary differential equations. For some algorithms, such as summation, inner product or matrix-vector multiplication, it has been proved that SR provides probabilistic error bounds better than the traditional deterministic bounds. In this paper, we extend this theoretical ground for a wider adoption of SR in computer architecture. First, we analyze the biases of the two SR modes: SR-nearness and SR-up-or-down. We demonstrate on a case study of Euler's forward method that the IEEE-754 default rounding modes and SR-up-or-down accumulate rounding errors across iterations and that SR-nearness, being unbiased, does not. Second, we prove an $O(\sqrt{n})$ probabilistic bound on the forward error of Horner's polynomial evaluation method with SR, improving on the known deterministic $O(n)$ bound.
- Published
- 2022
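The bias difference between the two SR modes can be seen on a single rounded value; a small sketch under our own reduced-precision model (t fractional bits, not actual hardware SR):

    import numpy as np

    def neighbours(x, t=8):
        """The two representable values enclosing x at t fractional bits."""
        m, e = np.frexp(x)
        low = np.floor(np.ldexp(m, t))
        return np.ldexp(low, e - t), np.ldexp(low + 1, e - t)

    x = 0.1
    lo, hi = neighbours(x)
    p = (x - lo) / (hi - lo)
    exp_nearness = p * hi + (1 - p) * lo  # expectation equals x: unbiased
    exp_up_or_down = 0.5 * hi + 0.5 * lo  # biased unless x is the midpoint
    print(exp_nearness - x)               # ~0
    print(exp_up_or_down - x)             # non-zero bias

Over many Euler steps, the non-zero bias of SR-up-or-down compounds, while the zero-mean errors of SR-nearness tend to cancel, which is the effect the case study demonstrates.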
7. Custom-Precision Mathematical Library Explorations for Code Profiling and Optimization
- Author
-
Defour, David, Castro, Pablo de Oliveira, Istoan, Matei, and Petit, Eric
- Subjects
Computer Science - Mathematical Software - Abstract
The typical processors used for scientific computing have fixed-width data-paths. This implies that mathematical libraries were specifically developed to target each of these fixed precisions (binary16, binary32, binary64). However, to address the increasing energy consumption and throughput requirements of scientific applications, library and hardware designers are moving beyond this one-size-fits-all approach. In this article, we propose to study the effects and benefits of using user-defined floating-point formats and target accuracies in calculations involving mathematical functions. Our tool collects input-data profiles and iteratively explores lower precisions for each call-site of a mathematical function in user applications. This profiling data will be a valuable asset for specializing and fine-tuning mathematical function implementations for a given application. We demonstrate the tool's capabilities on SGP4, a satellite tracking application. The profile data shows the potential for specialization and provides insight into where it is useful to provide variable-precision designs for elementary function evaluation.
- Published
- 2020
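A rough sketch of the exploration loop described above, under our own naming (this is not the tool's interface): emulate a narrower significand by truncating trailing bits of the inputs, and search for the smallest width that still meets a target accuracy on a recorded input profile.

    import math
    import numpy as np

    def truncate_significand(x, bits):
        """Keep only `bits` fractional bits of the significand of x."""
        m, e = np.frexp(x)
        return np.ldexp(np.trunc(np.ldexp(m, bits)), e - bits)

    def min_bits(func, inputs, target_rel_err):
        """Smallest significand width whose truncated inputs still give
        results within target_rel_err of the full-precision results."""
        ref = np.array([func(v) for v in inputs])
        for bits in range(2, 53):
            out = np.array([func(truncate_significand(v, bits))
                            for v in inputs])
            if np.max(np.abs(out - ref) / np.abs(ref)) <= target_rel_err:
                return bits
        return 53

    profile = np.random.default_rng(3).uniform(0.1, 10.0, 1000)
    print(min_bits(math.exp, profile, 1e-6))  # ~24 bits suffice here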
8. Comparing Perturbation Models for Evaluating Stability of Neuroimaging Pipelines
- Author
-
Kiar, Gregory, Castro, Pablo de Oliveira, Rioux, Pierre, Petit, Eric, Brown, Shawn T., Evans, Alan C., and Glatard, Tristan
- Subjects
Quantitative Biology - Neurons and Cognition ,Electrical Engineering and Systems Science - Image and Video Processing - Abstract
A lack of software reproducibility has become increasingly apparent in the last several years, calling into question the validity of scientific findings affected by published tools. Reproducibility issues may have numerous sources of error, including the underlying numerical stability of algorithms and implementations employed. Various forms of instability have been observed in neuroimaging, including across operating system versions, minor noise injections, and implementation of theoretically equivalent algorithms. In this paper we explore the effect of various perturbation methods on a typical neuroimaging pipeline through the use of i) targeted noise injections, ii) Monte Carlo Arithmetic, and iii) varying operating systems to identify the quality and severity of their impact. The work presented here demonstrates that even low-order computational models, such as the connectome estimation pipeline used here, are susceptible to noise. This suggests that stability is a relevant axis upon which tools should be compared, developed, or improved, alongside more commonly considered axes such as accuracy/biological feasibility or performance. The heterogeneity observed across participants clearly illustrates that stability is a property of not just the data or tools independently, but their interaction. Characterization of stability should therefore be evaluated for specific analyses and performed on a representative set of subjects for consideration in subsequent statistical testing. Additionally, identifying how this relationship scales to higher-order models is an exciting next step which will be explored. Finally, the joint application of perturbation methods with post-processing approaches such as bagging or signal normalization may lead to the development of more numerically stable analyses while maintaining sensitivity to meaningful variation., Comment: 9 pages, 5 figures, 1 table, paper published at IJHPCA
- Published
- 2019
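A condensed sketch of the one-ulp noise-injection idea (our stand-ins: the "pipeline" below is a placeholder, not a connectome estimator):

    import numpy as np

    def one_ulp_noise(x, rng):
        """Perturb every element of x by at most one unit in the last place."""
        return x + rng.uniform(-1.0, 1.0, x.shape) * np.spacing(x)

    rng = np.random.default_rng(4)
    data = rng.standard_normal((64, 64))   # stand-in for an image volume

    def pipeline(d):                       # stand-in for the real pipeline
        return np.linalg.svd(d, compute_uv=False)[0]

    results = [pipeline(one_ulp_noise(data, rng)) for _ in range(10)]
    print(np.std(results) / np.mean(results))  # spread across perturbed runs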
9. Confidence Intervals for Stochastic Arithmetic
- Author
-
Sohier, Devan, Castro, Pablo de Oliveira, Févotte, François, Lathuilière, Bruno, Petit, Eric, and Jamond, Olivier
- Subjects
Mathematics - Numerical Analysis ,Statistics - Applications ,Statistics - Computation - Abstract
Quantifying errors and losses due to the use of Floating-Point (FP) calculations in industrial scientific computing codes is an important part of the Verification, Validation and Uncertainty Quantification (VVUQ) process. Stochastic Arithmetic is one way to model and estimate FP losses of accuracy, which scales well to large, industrial codes. It exists in different flavors, such as CESTAC or MCA, implemented in various tools such as CADNA, Verificarlo or Verrou. These methodologies and tools are based on the idea that FP losses of accuracy can be modeled via randomness. Therefore, they share the same need to perform a statistical analysis of program results in order to estimate the significance of the results. In this paper, we propose a framework to perform a solid statistical analysis of Stochastic Arithmetic. This framework unifies all existing definitions of the number of significant digits (CESTAC and MCA), and also proposes a new quantity of interest: the number of digits contributing to the accuracy of the results. Sound confidence intervals are provided for all estimators, both in the case of normally distributed results, and in the general case. The use of this framework is demonstrated by two case studies of large, industrial codes: Europlexus and code_aster.
- Published
- 2018
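A minimal sketch of the significant-digit estimate with a confidence interval (our construction following the normal-case reasoning above, not the paper's reference implementation):

    import numpy as np
    from scipy import stats

    def significant_digits(samples, confidence=0.95):
        """Estimated significant decimal digits, plus a lower confidence
        bound obtained from the chi-square bound on the standard deviation
        (valid under the normality assumption)."""
        n = len(samples)
        mu = np.mean(samples)
        s = np.std(samples, ddof=1)
        chi2 = stats.chi2.ppf(1 - confidence, df=n - 1)
        s_upper = s * np.sqrt((n - 1) / chi2)  # upper bound on sigma
        return -np.log10(s / abs(mu)), -np.log10(s_upper / abs(mu))

    fake_mca_outputs = np.random.default_rng(5).normal(1.0, 1e-8, 30)
    est, lower = significant_digits(fake_mca_outputs)
    print(f"estimated digits: {est:.1f} (at least {lower:.1f} at 95%)")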
10. Verificarlo: checking floating point accuracy through Monte Carlo Arithmetic
- Author
-
Denis, Christophe, Castro, Pablo De Oliveira, and Petit, Eric
- Subjects
Computer Science - Mathematical Software ,Computer Science - Numerical Analysis - Abstract
Numerical accuracy of floating-point computation is a well-studied topic which has not yet made its way to the end-user in scientific computing. Yet, it has become a critical issue with the recent requirements for code modernization to harness new highly parallel hardware and perform higher resolution computation. To democratize numerical accuracy analysis, it is important to propose tools and methodologies to study large use cases in a reliable and automatic way. In this paper, we propose Verificarlo, an extension to the LLVM compiler to automatically use Monte Carlo Arithmetic in a transparent way for the end-user. It supports all the major languages including C, C++, and Fortran. Unlike source-to-source approaches, our implementation captures the influence of compiler optimizations on the numerical accuracy. We illustrate how Monte Carlo Arithmetic using the Verificarlo tool outperforms the existing approaches on various use cases and is a step toward automatic numerical analysis.
- Published
- 2015
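Verificarlo itself instruments programs at the LLVM level; purely to convey the idea, here is a toy Monte Carlo Arithmetic model in Python (our construction): each operation's result is randomly perturbed at a virtual precision t, and the spread of repeated runs estimates the reliable digits.

    import numpy as np

    rng = np.random.default_rng(6)

    def mca(x, t=24):
        """Random perturbation of x at virtual precision t."""
        m, e = np.frexp(x)
        return x + np.ldexp(rng.uniform(-0.5, 0.5), e - t)

    def dot_mca(a, b, t=24):
        """Inner product with every operation perturbed."""
        acc = 0.0
        for ai, bi in zip(a, b):
            acc = mca(acc + mca(ai * bi, t), t)
        return acc

    a = rng.standard_normal(1000)
    b = rng.standard_normal(1000)
    runs = np.array([dot_mca(a, b) for _ in range(30)])
    digits = -np.log10(np.std(runs) / abs(np.mean(runs)))
    print(f"~{digits:.1f} reliable digits at virtual precision 24")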
11. Confidence Intervals for Stochastic Arithmetic
- Author
-
Sohier, Devan, Castro, Pablo De Oliveira, Févotte, François, Lathuilière, Bruno, Petit, Eric, and Jamond, Olivier
- Published
- 2021
12. Numerical Uncertainty in Analytical Pipelines Lead to Impactful Variability in Brain Networks
- Author
-
Kiar, Gregory, Chatelain, Yohan, Castro, Pablo de Oliveira, Petit, Eric, Rokem, Ariel, Varoquaux, Gaël, Misic, Bratislav, Evans, Alan C., and Glatard, Tristan
- Published
- 2020
13. CERE
- Author
-
Castro, Pablo De Oliveira, Akel, Chadi, Petit, Eric, Popov, Mihail, and Jalby, William
- Published
- 2015
14. PCERE: Fine-Grained Parallel Benchmark Decomposition for Scalability Prediction.
- Author
-
Popov, Mihail, Akel, Chadi, Conti, Florent, Jalby, William, and Castro, Pablo de Oliveira
- Published
- 2015
15. Is Source-Code Isolation Viable for Performance Characterization?
- Author
-
Akel, Chadi, Kashnikov, Yuriy, Castro, Pablo de Oliveira, and Jalby, William
- Published
- 2013
16. Transactional Actors: Communication in Transactions
- Author
-
De Meuter, Wolfgang, De Koster, Joeri, Swalens, Janwillem, Jannesari, Ali, Mattson, Tim, Castro, Pablo de Oliveira, and Sato, Yukinori
- Subjects
Distributed Computing ,Actors ,Concurrency ,Software Transactional Memory ,Transactional Memory ,Multiversion Concurrency Control ,Distributed Concurrency Control ,Non-lock Concurrency Control ,Optimistic Concurrency Control ,Concurrent Object-Oriented Programming ,Isolation (database systems) ,Parallelism ,Software Engineering - Abstract
Developers often require different concurrency models to fit the various concurrency needs of the different parts of their applications. Many programming languages, such as Clojure, Scala, and Haskell, cater to this need by incorporating different concurrency models. It has been shown that, in practice, developers often combine these concurrency models. However, they are often combined in an ad hoc way and the semantics of the combination is not always well-defined. The starting hypothesis of this paper is that different concurrency models need to be carefully integrated such that the properties of each individual model are still maintained. This paper proposes one such combination, namely the combination of the actor model and software transactional memory. In this paper we show that, while both individual models offer strong safety guarantees, these guarantees are no longer valid when they are combined. The main contribution of this paper is a novel hybrid concurrency model called transactional actors that combines both models while preserving their guarantees. This paper also presents an implementation in Clojure and an experimental evaluation of the performance of the transactional actor model.
- Published
- 2017
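To make the hazard concrete, a toy Python sketch (ours; the paper's implementation is in Clojure) of why a naive combination breaks the models' guarantees: a message sent from inside a transaction that retries is delivered twice, unless sends are deferred until commit as in the transactional-actors design.

    import queue

    inbox = queue.Queue()              # stand-in for an actor's mailbox

    def naive_transaction(body, retries=1):
        """Side effects escape each attempt: a retry duplicates the send."""
        for attempt in range(retries + 1):
            body(inbox.put)            # sends happen immediately

    def deferred_transaction(body, retries=1):
        """Sends are recorded and only delivered by the committing attempt."""
        for attempt in range(retries + 1):
            effects = []
            body(effects.append)       # body records its sends
            if attempt == retries:     # the last attempt "commits"
                for msg in effects:
                    inbox.put(msg)

    naive_transaction(lambda send: send("hello"))
    print(inbox.qsize())               # 2: the retried attempt sent again

    while not inbox.empty():
        inbox.get()

    deferred_transaction(lambda send: send("hello"))
    print(inbox.qsize())               # 1: exactly-once delivery preserved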