Exploring Temporal Dependencies in Multimodal Referring Expressions with Mixed Reality
- Source :
- Virtual, Augmented and Mixed Reality. Applications and Case Studies, HCI (10)
- Publication Year :
- 2019
- Publisher :
- Springer International Publishing, 2019.
Abstract
- In collaborative tasks, people rely on verbal and non-verbal cues simultaneously to communicate with each other. For human-robot interaction to run smoothly and naturally, a robot should be able to robustly disambiguate referring expressions. In this work, we propose a model that disambiguates multimodal fetching requests using modalities such as head movements, hand gestures, and speech. We analysed data acquired from mixed reality experiments and formulated the hypothesis that modelling the temporal dependencies of events across these three modalities increases the model's predictive power. We evaluated our model within a Bayesian framework for interpreting referring expressions, with and without exploiting the temporal prior.
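- The abstract gives no implementation details, but the described approach (fusing per-modality evidence for candidate objects under a Bayesian framework, optionally reweighted by a temporal prior) can be sketched as below. This is a minimal illustration, not the authors' code: the `posterior` function, the modality names, and the temporal weights are all hypothetical assumptions for exposition.

```python
import numpy as np

def posterior(likelihoods, temporal_weights=None):
    """Combine per-modality likelihoods P(obs_m | object) into a
    posterior over candidate objects, assuming a uniform object prior
    and conditional independence across modalities.

    likelihoods: dict mapping modality name -> array of shape (n_objects,)
    temporal_weights: optional dict mapping modality name -> scalar weight
        derived from event timing (a stand-in for the paper's temporal
        prior); None means modalities are fused unweighted.
    """
    n = len(next(iter(likelihoods.values())))
    log_post = np.zeros(n)  # uniform prior contributes a constant in log space
    for modality, lik in likelihoods.items():
        w = 1.0 if temporal_weights is None else temporal_weights[modality]
        # Weighted log-likelihood fusion; clip to avoid log(0).
        log_post += w * np.log(np.clip(lik, 1e-12, None))
    post = np.exp(log_post - log_post.max())  # subtract max for stability
    return post / post.sum()

# Toy example with three candidate objects and three modalities:
liks = {
    "head":    np.array([0.5, 0.3, 0.2]),
    "gesture": np.array([0.2, 0.6, 0.2]),
    "speech":  np.array([0.4, 0.4, 0.2]),
}
print(posterior(liks))                                          # no temporal prior
print(posterior(liks, {"head": 0.5, "gesture": 1.5, "speech": 1.0}))  # with prior
```

- In this sketch, the "with and without the temporal prior" comparison from the abstract corresponds to calling `posterior` with and without `temporal_weights`; how such weights would actually be derived from event timing is left unspecified by the abstract.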
- Subjects :
- Computer science
- Human–computer interaction
- Human–robot interaction
- Multimodal interaction
- Mixed reality
- Robot
- Gesture
- 01 natural sciences
- 010306 general physics
- 0103 physical sciences
- 02 engineering and technology
- 0202 electrical engineering, electronic engineering, information engineering
- 020201 artificial intelligence & image processing
Details
- ISBN :
- 978-3-030-21564-4
- Database :
- OpenAIRE
- Journal :
- Virtual, Augmented and Mixed Reality. Applications and Case Studies, HCI (10)
- Accession number :
- edsair.doi...........54dd1fb03990730e0983458808b71635
- Full Text :
- https://doi.org/10.1007/978-3-030-21565-1_8