Variable Size for Recurrent Attention Model and Application Research.
- Source :
- Journal of Computer Engineering & Applications; Jun2022, Vol. 58 Issue 12, p243-248, 6p
- Publication Year :
- 2022
Abstract
- The visual attention model has been applied to image recognition tasks: it automatically locates discriminative local parts of a fine-grained image to capture distinct features. However, the input image size is fixed while the size of the discriminative part varies, so the model cannot capture all image features precisely and classification accuracy suffers. This paper proposes a variable-size recurrent attention network (VSRAM). Unlike the fixed-input-size recurrent attention model (RAM), VSRAM jointly optimizes an attention policy and a size-sampling policy to learn both the position and the size of the next input glimpse on its own, reducing the total input image area and increasing processing speed. Experimental results show that dynamically adjusting the input image size achieves the same recognition accuracy as RAM while efficiently reducing the total input image area and increasing processing speed. [ABSTRACT FROM AUTHOR]
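- The glimpse mechanism the abstract describes can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the learned location and size-sampling policies are replaced by random stand-ins, and `extract_glimpse`, the output resolution, and the recurrent update are all hypothetical choices for illustration only.

```python
import numpy as np

def extract_glimpse(image, cx, cy, size, out=16):
    """Crop a size x size patch centered at (cx, cy), then resample to out x out.

    Resampling to a fixed resolution lets a variable-size glimpse feed a
    fixed-size network input, which is the key idea behind variable glimpses.
    """
    h, w = image.shape
    half = size // 2
    # clamp the crop window to the image bounds
    x0, x1 = max(cx - half, 0), min(cx + half, w)
    y0, y1 = max(cy - half, 0), min(cy + half, h)
    patch = image[y0:y1, x0:x1]
    # nearest-neighbour resample to the fixed output resolution
    ys = np.linspace(0, patch.shape[0] - 1, out).astype(int)
    xs = np.linspace(0, patch.shape[1] - 1, out).astype(int)
    return patch[np.ix_(ys, xs)]

# Toy recurrent loop over a random "image"; in the paper, the position and
# size of each glimpse would come from learned policies, not a RNG.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
state = np.zeros(32)   # stand-in for the recurrent hidden state
total_area = 0
for step in range(4):
    cx, cy = (int(v) for v in rng.integers(8, 56, size=2))
    size = int(rng.choice([12, 20, 28]))       # variable glimpse size
    glimpse = extract_glimpse(image, cx, cy, size)
    total_area += size * size                  # metric the paper aims to reduce
    state = np.tanh(state + glimpse.mean())    # stand-in for the recurrent update

print(glimpse.shape, total_area)
```

Because each glimpse covers at most 28x28 pixels here, four glimpses touch far less area than repeatedly processing the full 64x64 image, which is the efficiency argument the abstract makes.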
- Subjects :
- MACHINE learning
REINFORCEMENT learning
- Language :
- Chinese
- ISSN :
- 1002-8331
- Volume :
- 58
- Issue :
- 12
- Database :
- Complementary Index
- Journal :
- Journal of Computer Engineering & Applications
- Publication Type :
- Academic Journal
- Accession number :
- 157603742
- Full Text :
- https://doi.org/10.3778/j.issn.1002-8331.2012-0056