On-device AI: Quantization-aware Training of Transformers in Time-Series
- Publication Year :
- 2024
Abstract
- Artificial Intelligence (AI) models for time-series in pervasive computing keep getting larger and more complex, and the Transformer is by far the most compelling of these models. However, it is difficult to obtain the desired performance when deploying such a massive model on a resource-constrained sensor device. My research focuses on optimizing the Transformer model for time-series forecasting tasks. The optimized model will be deployed as a hardware accelerator on embedded Field Programmable Gate Arrays (FPGAs). I will investigate the impact of applying Quantization-aware Training to the Transformer model to reduce its size and runtime memory footprint while maximizing the advantages of FPGAs.
  Comment: This paper was accepted by the 2023 IEEE International Conference on Pervasive Computing and Communications (PhD Forum).
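The abstract does not include an implementation, so the following is a minimal sketch of the core idea only: quantization-aware training via fake quantization with a straight-through estimator, applied to a toy PyTorch Transformer forecaster. Everything here (`FakeQuantLinear`, `TinyForecaster`, the 8-bit setting, all hyperparameters) is an illustrative assumption, not the paper's actual method or FPGA toolchain.

```python
# Minimal QAT sketch: weights are rounded to int levels in the forward
# pass, while gradients flow through unchanged (straight-through estimator).
# All names and sizes below are hypothetical, chosen only for illustration.
import torch
import torch.nn as nn

def fake_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Simulate uniform integer quantization in the forward pass."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale
    return x + (q - x).detach()  # forward: quantized; backward: identity

class FakeQuantLinear(nn.Linear):
    """Linear layer whose weights are fake-quantized during training."""
    def forward(self, x):
        return nn.functional.linear(x, fake_quantize(self.weight), self.bias)

class TinyForecaster(nn.Module):
    """Toy Transformer encoder for one-step-ahead forecasting."""
    def __init__(self, d_model=32, nhead=4):
        super().__init__()
        self.embed = FakeQuantLinear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.head = FakeQuantLinear(d_model, 1)

    def forward(self, x):               # x: (batch, seq_len, 1)
        h = self.encoder(self.embed(x))
        return self.head(h[:, -1])      # predict the next value

model = TinyForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 16, 1)               # synthetic sliding windows
y = torch.randn(8, 1)                   # next-step targets
for _ in range(10):                     # toy training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```

In an actual deployment flow of this kind, the fake-quantized weights would presumably be exported as true integers so the FPGA accelerator can use low-bit multiply-accumulate units, which is where the size and memory-footprint savings the abstract mentions are realized.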
- Subjects :
- Computer Science - Machine Learning
- Computer Science - Artificial Intelligence
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2408.16495
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.1109/PerComWorkshops56833.2023.10150339