
A Methodology to Design Quantized Deep Neural Networks for Automatic Modulation Recognition

Authors:
David Góez
Paola Soto
Steven Latré
Natalia Gaviria
Miguel Camelo
Source:
Algorithms, Vol 15, Iss 12, p 441 (2022)
Publication Year:
2022
Publisher:
MDPI AG, 2022.

Abstract

Next-generation communication systems will face new challenges related to efficiently managing the available resources, such as the radio spectrum. Deep learning (DL) is one of the optimization approaches used to address and solve these challenges. However, there is a gap between research and industry: most AI models that solve communication problems cannot be implemented in current communication devices due to their high computational requirements. New approaches seek to reduce the size of DL models through quantization techniques, changing the traditional arithmetic from a 32-bit (or 64-bit) floating-point representation to a (usually small) fixed-point one. However, there is no analytical method to determine the level of quantization that yields the best trade-off between the reduction in computational cost and an acceptable accuracy for a specific problem. In this work, we propose an analysis methodology to determine the degree of quantization in a deep neural network (DNN) model that solves the problem of automatic modulation recognition (AMR) in a radio system. We use the Brevitas framework to build and analyze different quantized variants of the VGG10 DL architecture adapted to the AMR problem. The computational inference cost is evaluated with the FINN framework from Xilinx Research Labs. The proposed design methodology allows us to obtain the combination of quantization bits per layer that provides an optimal trade-off between model performance (i.e., accuracy) and model complexity (i.e., size) according to a set of weights associated with each optimization objective. For example, using the proposed methodology, we found a model architecture that reduces the model size by 75.8% compared to the non-quantized baseline, with a performance degradation of only 0.06%.
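
The abstract names Brevitas as the framework used to build per-layer quantized variants of the VGG10 architecture. As a minimal sketch of what such per-layer quantization looks like in a PyTorch/Brevitas setup, the block below quantizes a single convolutional stage with configurable weight and activation bit widths. The layer sizes, the 1024-sample I/Q input shape, and the 4-bit widths are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import brevitas.nn as qnn

class QuantConvBlock(nn.Module):
    """One VGG-style conv block with configurable quantization.

    Illustrative sketch only: weight_bits/act_bits are the per-layer
    knobs a bit-width search like the one described here would sweep.
    """

    def __init__(self, in_ch, out_ch, weight_bits=4, act_bits=4):
        super().__init__()
        # Weights quantized to `weight_bits` bits.
        self.conv = qnn.QuantConv1d(in_ch, out_ch, kernel_size=3,
                                    padding=1,
                                    weight_bit_width=weight_bits)
        self.bn = nn.BatchNorm1d(out_ch)
        # Activations quantized to `act_bits` bits.
        self.relu = qnn.QuantReLU(bit_width=act_bits)
        self.pool = nn.MaxPool1d(2)

    def forward(self, x):
        return self.pool(self.relu(self.bn(self.conv(x))))

# Usage: a batch of 8 I/Q signals, shaped (batch, 2 channels, 1024 samples).
x = torch.randn(8, 2, 1024)
block = QuantConvBlock(in_ch=2, out_ch=64, weight_bits=4, act_bits=4)
y = block(x)
print(y.shape)  # torch.Size([8, 64, 512])
```

Enumerating such blocks over candidate bit widths per layer is one way to generate the quantized model variants whose accuracy and FINN-reported inference cost the methodology trades off against each other.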

Details

Language:
English
ISSN:
1999-4893
Volume:
15
Issue:
12
Database:
Directory of Open Access Journals
Journal:
Algorithms
Publication Type:
Academic Journal
Accession Number:
edsdoj.827773dea3ff4cafbc5b62750a23fa57
Document Type:
Article
Full Text:
https://doi.org/10.3390/a15120441