Effective human–object interaction recognition for edge devices in intelligent space
- Source: SICE Journal of Control, Measurement, and System Integration, Vol. 17, Iss. 1, pp. 1-9 (2024)
- Publication Year: 2024
- Publisher: Taylor & Francis Group, 2024
Abstract
To enable machines to understand human-centric images and videos, they must be able to detect human–object interactions. This capability has been studied with various approaches, but previous research has focused mainly on recognition accuracy measured on widely used open datasets. Given the need for advanced machine-learning systems that provide spatial analysis and services, a recognition model should be robust to varied conditions, highly extensible, and fast enough to run with minimal computational overhead. We therefore propose a novel method that combines a skeletal approach with object detection to predict a set of ⟨human, verb, object⟩ triplets in a video frame, designed for robustness, extensibility, and light weight. Training the model on perceptual cues similar to those humans use yields sufficient accuracy for advanced social systems, even with only a small training dataset. The proposed model is trained using only the coordinates of the detected object and the human landmarks, making it robust to varied situations and lightweight compared with deep-learning methods. In the experiment, a scenario in which a human works at a desk was simulated, and the model was trained on object-specific interactions. The accuracy of the proposed model was evaluated on various types of datasets.
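The abstract states that the model consumes only object coordinates and human skeletal landmarks. The sketch below is a minimal illustration of how such a coordinate-only feature vector could be assembled and fed to a lightweight classifier to produce a ⟨human, verb, object⟩ triplet; the feature layout, verb set, function names, and classifier choice are assumptions for illustration, not the authors' published implementation.

```python
# Hypothetical sketch of a coordinate-only interaction pipeline:
# human skeleton landmarks + detected object box -> lightweight verb classifier.
# The feature layout, verb list, and model choice are assumed for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

VERBS = ["hold", "write", "type", "no_interaction"]  # assumed example verb set

def make_feature(landmarks, obj_box):
    """Concatenate normalized 2-D skeleton landmarks with an object bounding box.

    landmarks: (K, 2) array of keypoint coordinates in [0, 1]
    obj_box:   (4,)   array (x_min, y_min, x_max, y_max) in [0, 1]
    """
    return np.concatenate([np.asarray(landmarks).ravel(), np.asarray(obj_box)])

def train(samples, verb_labels):
    # Coordinate features keep the model small and fast, in line with the
    # paper's goal of running on edge devices with little computation.
    X = np.stack([make_feature(lm, box) for lm, box in samples])
    clf = RandomForestClassifier(n_estimators=50, max_depth=8)
    clf.fit(X, verb_labels)
    return clf

def predict_triplet(clf, landmarks, obj_box, object_name):
    verb = clf.predict(make_feature(landmarks, obj_box)[None, :])[0]
    return ("human", verb, object_name)  # <human, verb, object> triplet
```

In such a setup, swapping the object detector or adding a new object class only changes the inputs, not the classifier structure, which is one plausible reading of the extensibility the abstract claims.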
Details
- Language: English
- ISSN: 1884-9970 and 1882-4889
- Volume: 17
- Issue: 1
- Database: Directory of Open Access Journals
- Journal: SICE Journal of Control, Measurement, and System Integration
- Publication Type: Academic Journal
- Accession number: edsdoj.424517c3243ac8869b5d653634c79
- Document Type: article
- Full Text: https://doi.org/10.1080/18824889.2023.2292353