Improving the numerical stability of fast matrix multiplication
- Source :
- Ballard, G., Benson, A. R., Druinsky, A., Lipshitz, B., & Schwartz, O. (2016). Improving the numerical stability of fast matrix multiplication. SIAM Journal on Matrix Analysis and Applications, 37(4), 1382-1418. doi: 10.1137/15M1032168. Lawrence Berkeley National Laboratory. Retrieved from: http://www.escholarship.org/uc/item/26v4646n
- Publication Year :
- 2016
- Publisher :
- eScholarship, University of California, 2016.
Abstract
- © 2016 Sandia Corporation, operator of Sandia National Laboratories for the U.S. Department of Energy. Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this paper that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve the accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. Finally, we benchmark performance and demonstrate our improved numerical accuracy.
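
The abstract's two central ideas, Strassen-style fast multiplication and diagonal scaling of the inputs, can be illustrated with a minimal sketch. The code below is not the authors' implementation; it is a standard recursive Strassen multiply (seven subproblem products instead of eight) plus a simple "outside" diagonal scaling wrapper of the kind the paper discusses for reducing input anomalies. The function names, the power-of-two size restriction, and the base-case threshold are assumptions made for illustration.

```python
import numpy as np

def strassen_multiply(A, B):
    """Recursive Strassen multiplication for square matrices whose
    dimension is a power of two (an assumption for this sketch)."""
    n = A.shape[0]
    if n <= 64:  # base case: fall back to the classical algorithm
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Strassen's seven products (the classical algorithm needs eight)
    M1 = strassen_multiply(A11 + A22, B11 + B22)
    M2 = strassen_multiply(A21 + A22, B11)
    M3 = strassen_multiply(A11, B12 - B22)
    M4 = strassen_multiply(A22, B21 - B11)
    M5 = strassen_multiply(A11 + A12, B22)
    M6 = strassen_multiply(A21 - A11, B11 + B12)
    M7 = strassen_multiply(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

def outside_scaled_multiply(A, B):
    """Outside diagonal scaling: normalize the rows of A and columns of B
    by their largest absolute entries before the fast multiply, then undo
    the scaling on the result. Assumes no row of A or column of B is
    entirely zero."""
    dr = np.max(np.abs(A), axis=1)          # row scaling factors for A
    dc = np.max(np.abs(B), axis=0)          # column scaling factors for B
    C = strassen_multiply(A / dr[:, None], B / dc[None, :])
    return C * dr[:, None] * dc[None, :]    # restore the original scale
```

Scaling equilibrates the entries seen by the fast algorithm, which can shrink the error terms that depend on the magnitudes of intermediate sums; the paper analyzes several such scaling schemes and their effect on the error bounds.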
- Subjects :
- Diagonal scaling
Mathematical optimization
Freivalds' algorithm
Numerical and Computational Mathematics
Applied Mathematics
Scalar (mathematics)
Computer Science - Numerical Analysis
Numerical Analysis (math.NA)
010103 numerical & computational mathematics
0102 computer and information sciences
01 natural sciences
Matrix multiplication
010201 computation theory & mathematics
Strassen algorithm
FOS: Mathematics
Use case
0101 mathematics
Algorithm
Analysis
Mathematics
Numerical stability
Coppersmith–Winograd algorithm
Details
- Database :
- OpenAIRE
- Journal :
- SIAM Journal on Matrix Analysis and Applications, 37(4), 1382-1418
- Accession number :
- edsair.doi.dedup.....7e7297d609b8ee43f78c6fd442fbff42