Exploration of block-wise dynamic sparseness
- Source :
- Pattern Recognition Letters
- Publication Year :
- 2021
- Publisher :
- Elsevier BV
Abstract
- Neural networks have achieved state-of-the-art performance across a wide variety of machine learning tasks, often with large and computation-heavy models. Inducing sparseness as a way to reduce the memory and computation footprint of these models has received significant research attention in recent years. In this paper, we present a new method for dynamic sparseness, whereby part of the computations is omitted dynamically, based on the input. For efficiency, we combine the idea of dynamic sparseness with block-wise matrix-vector multiplications. In contrast to static sparseness, which permanently zeroes out selected positions in the weight matrices, our method preserves the full network capabilities by potentially accessing any trained weight. Yet, matrix-vector multiplications are accelerated by omitting a pre-defined fraction of weight blocks from the matrix, based on the input. Experimental results on the task of language modeling, using recurrent and quasi-recurrent models, show that the proposed method can outperform static sparseness baselines. In addition, our method reaches language modeling perplexities similar to those of the dense baseline, at half the computational cost at inference time.
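- The core idea described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): the weight matrix is partitioned into a grid of blocks, a small gating function scores the blocks from the current input, and only the top-scoring fraction of blocks participates in the matrix-vector product. The names below (block_sparse_matvec, gate_W, keep_fraction) and the linear gating function are hypothetical choices for illustration.

```python
import numpy as np

def block_sparse_matvec(W, x, gate_W, keep_fraction=0.5, block_size=64):
    """Sketch of an input-dependent block-sparse matrix-vector product.

    W             : (out_dim, in_dim) dense weight matrix, viewed as a grid of blocks
    x             : (in_dim,) input vector
    gate_W        : (num_blocks, in_dim) hypothetical gating weights, one score per block
    keep_fraction : fraction of weight blocks that is actually multiplied
    """
    out_dim, in_dim = W.shape
    n_out = out_dim // block_size
    n_in = in_dim // block_size

    # Score every block from the current input (a simple linear gate, assumed here).
    scores = gate_W @ x                       # shape: (n_out * n_in,)
    k = max(1, int(keep_fraction * scores.size))
    keep = np.argsort(scores)[-k:]            # indices of the blocks to keep

    y = np.zeros(out_dim)
    for idx in keep:
        r, c = divmod(idx, n_in)              # block coordinates in the grid
        rows = slice(r * block_size, (r + 1) * block_size)
        cols = slice(c * block_size, (c + 1) * block_size)
        # Only the selected blocks contribute; omitted blocks cost nothing.
        y[rows] += W[rows, cols] @ x[cols]
    return y

# Usage sketch: a 512x512 layer with an 8x8 block grid, keeping half of the blocks
# per input, roughly matching the "half the computational cost" setting.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
x = rng.standard_normal(512)
gate_W = rng.standard_normal((64, 512))
y = block_sparse_matvec(W, x, gate_W, keep_fraction=0.5)
print(y.shape)                                # (512,)
```
- Unlike static pruning, every block of W remains trained and reachable; which blocks are skipped changes per input, so capacity is preserved while the per-step cost drops with keep_fraction.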
- Subjects :
- Technology and Engineering
- Artificial neural network
- Computer science
- Computation
- Inference
- Neural network
- Matrix (mathematics)
- Dynamic sparseness
- Artificial Intelligence
- Signal Processing
- Fraction (mathematics)
- Block-wise matrix multiplication
- Computer Vision and Pattern Recognition
- Language model
- State (computer science)
- Algorithm
- Software
- Block (data storage)
Details
- ISSN :
- 0167-8655 and 1872-7344
- Volume :
- 151
- Database :
- OpenAIRE
- Journal :
- Pattern Recognition Letters
- Accession number :
- edsair.doi.dedup.....f6334bf4bc2e4c4d1f79494444965b9a