Extending multimedia languages to support multimodal user interactions
- Authors
- Simone Diniz Junqueira Barbosa, Álan L. V. Guedes, and Roberto Gerson de Albuquerque Azevedo
- Subjects
- Multimedia, Computer Networks and Communications, Computer science, Software engineering, IPTV, Nested Context Language, Multimodal interaction, Software framework, Hardware and Architecture, Media Technology, Artificial intelligence & image processing, User interface, Software, Interactive media, Declarative programming, Gesture
- Abstract
Historically, research in the Multimedia community has focused on output modalities, through studies on timing and multimedia processing. The Multimodal Interaction community, on the other hand, has focused on user-generated modalities, through studies on Multimodal User Interfaces (MUIs). In this paper, aiming to assist the development of multimedia applications with MUIs, we propose integrating concepts from these two communities in a single high-level programming framework. The framework integrates user modalities, both user-generated (e.g., speech, gestures) and user-consumed (e.g., audiovisual, haptic), into declarative programming languages for the specification of interactive multimedia applications. To illustrate our approach, we instantiate the framework in NCL (Nested Context Language). NCL is the declarative language for developing interactive applications for Brazilian Digital TV and an ITU-T Recommendation for IPTV services. To help evaluate our approach, we discuss a usage scenario and implement it as an NCL application extended with the proposed multimodal features. We also compare the expressiveness of the multimodal NCL against existing multimedia and multimodal languages, for both input and output modalities.
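The paper itself defines the concrete extension syntax; as a rough, hypothetical sketch of the idea, a multimodal NCL document might pair an input media object carrying a speech grammar with a conventional video object, linked by a causal connector so that a recognized utterance starts playback. The `onRecognize` role, the SRGS media type, and all file names below are assumptions for illustration, not the paper's actual syntax:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ncl id="multimodalExample" xmlns="http://www.ncl.org.br/NCL3.0/EDTVProfile">
  <head>
    <connectorBase>
      <!-- Hypothetical causal connector: when the user's speech is
           recognized, start the bound output media object -->
      <causalConnector id="onRecognizeStart">
        <simpleCondition role="onRecognize"/>
        <simpleAction role="start"/>
      </causalConnector>
    </connectorBase>
  </head>
  <body>
    <port id="entry" component="voiceCmd"/>
    <!-- Assumed input-modality media object: an SRGS grammar describing
         the speech commands the application accepts -->
    <media id="voiceCmd" src="commands.srgs" type="application/srgs+xml"/>
    <!-- Conventional user-consumed (output) media object -->
    <media id="video" src="movie.mp4"/>
    <!-- Causal link: recognition of a voiceCmd utterance starts the video -->
    <link xconnector="onRecognizeStart">
      <bind role="onRecognize" component="voiceCmd"/>
      <bind role="start" component="video"/>
    </link>
  </body>
</ncl>
```

The sketch keeps NCL's existing causality model (connectors and links) intact and treats the recognizer as just another media object, which is the spirit of the framework described in the abstract: input and output modalities specified uniformly in one declarative document.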
- Published
- 2016