OFIDA: Object-focused image data augmentation with attention-driven graph convolutional networks.
- Source :
- PLoS ONE, Vol 19, Iss 5, p e0302124 (2024)
- Publication Year :
- 2024
- Publisher :
- Public Library of Science (PLoS), 2024.
-
Abstract
- Image data augmentation plays a crucial role in data augmentation (DA) by increasing the quantity and diversity of labeled training data. However, existing methods have limitations. Notably, techniques like image manipulation, erasing, and mixing can distort images, compromising data quality. Methods such as AutoAugment and feature augmentation struggle to represent objects accurately without confusion. Other techniques, as seen in deep generative models, have difficulty preserving fine details and spatial relationships. To address these limitations, we propose OFIDA, an object-focused image data augmentation algorithm. OFIDA implements one-to-many enhancements that not only preserve essential target regions but also more faithfully simulate real-world settings and data distributions. Specifically, OFIDA utilizes a graph-based structure and object detection to streamline augmentation. By leveraging graph properties such as connectivity and hierarchy, it captures object essence and context for improved comprehension in real-world scenarios. We then introduce DynamicFocusNet, a novel object detection algorithm built on the graph framework. DynamicFocusNet merges dynamic graph convolutions and attention mechanisms to flexibly adjust receptive fields. Finally, the detected target images are extracted to facilitate one-to-many data augmentation. Experimental results validate the superiority of our OFIDA method over state-of-the-art methods across six benchmark datasets.
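- The abstract's core mechanism, combining graph convolution with attention so that edge weights (and thus receptive fields) adapt to the input, can be illustrated with a minimal NumPy sketch. This is not the paper's DynamicFocusNet implementation; the function and parameter names (`attention_graph_conv`, `Wa`) are illustrative assumptions, and the layer shown is a generic attention-weighted graph convolution in the spirit the abstract describes.

```python
import numpy as np

def attention_graph_conv(X, A, W, Wa):
    """One attention-weighted graph convolution step (illustrative sketch).

    X:  (n, d)     node features
    A:  (n, n)     binary adjacency matrix
    W:  (d, d_out) feature projection weights
    Wa: (2*d,)     attention weights scoring each edge
    """
    n = X.shape[0]
    A = A + np.eye(n)  # self-loops: each node also attends to itself
    # Raw attention score for edge (i, j): Wa . [x_i || x_j]
    scores = np.full((n, n), -np.inf)
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                scores[i, j] = np.concatenate([X[i], X[j]]) @ Wa
    # Softmax over each node's neighbourhood -> dynamic edge weights,
    # which is what lets the effective receptive field vary per input
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha = e / e.sum(axis=1, keepdims=True)
    # Aggregate neighbour features with attention, project, apply ReLU
    return np.maximum(alpha @ X @ W, 0.0)

# Tiny usage example on a 4-node path graph
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = attention_graph_conv(X, A, rng.standard_normal((3, 2)), rng.standard_normal(6))
print(H.shape)  # (4, 2)
```

Stacking such layers, with the attention weights learned, yields receptive fields that adjust dynamically per node, which is the property the abstract attributes to DynamicFocusNet.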
Details
- Language :
- English
- ISSN :
- 19326203
- Volume :
- 19
- Issue :
- 5
- Database :
- Directory of Open Access Journals
- Journal :
- PLoS ONE
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.90481e9d1c5449e2bccbd1dc5e78fe4b
- Document Type :
- article
- Full Text :
- https://doi.org/10.1371/journal.pone.0302124