1,714 results for "interaction technique"
Search Results
152. Perceptual User Interfaces
- Author
-
Turk, Matthew, Earnshaw, Rae A., editor, Guedj, Richard A., editor, Dam, Andries van, editor, and Vince, John A., editor
- Published
- 2001
- Full Text
- View/download PDF
153. Usability Evaluation
- Author
-
Paternò, Fabio, Paul, Ray J., editor, Thomas, Peter J., editor, Kuljis, Jasna, editor, and Paternò, Fabio
- Published
- 2000
- Full Text
- View/download PDF
154. Task-Based Design
- Author
-
Paternò, Fabio, Paul, Ray J., editor, Thomas, Peter J., editor, Kuljis, Jasna, editor, and Paternò, Fabio
- Published
- 2000
- Full Text
- View/download PDF
155. Dynamics in Interaction on the Responsive Workbench
- Author
-
Koutek, Michal, Post, Frits H., Hansmann, W., editor, Purgathofer, W., editor, Sillion, F., editor, Mulder, Jurriaan, editor, and van Liere, Robert, editor
- Published
- 2000
- Full Text
- View/download PDF
156. Developing a Novel Hands-Free Interaction Technique Based on Nose and Teeth Movements for Using Mobile Devices
- Author
-
Mahmud Sarwar, Rafiur Rahman Khan, A. K. M. Najmul Islam, Shamima Nasrin, Muhammad Nazrul Islam, Md. Mahadi Hassan Munna, and Shadman Aadeeb
- Subjects
General Computer Science, Computer science, Scroll, mobile device, human-mobile interaction, Cursor (databases), Lucas–Kanade method, General Materials Science, Computer vision, HCI, End user, General Engineering, gesture operations, Interaction technique, accessibility, Haar-like features, Face (geometry), disabled user, Electrical engineering. Electronics. Nuclear engineering, Artificial intelligence, Mobile device
- Abstract
Human-mobile interaction aims at facilitating interaction with smartphone devices. The conventional way to interact with mobile devices is through manual input, and most applications assume that the end user has full control over their hand movements. However, this assumption excludes people who are unable to use their hands or have suffered limb damage. In this paper, we propose a nose- and teeth-based interaction system that allows users to control their mobile devices completely hands-free. The proposed system uses the front-facing camera of the smartphone to track the position of the nose for cursor control on the smartphone screen, and detects teeth to perform touch-screen events such as tap, scroll, long press, and drag. The Viola-Jones algorithm is used to detect the face and teeth based on Haar features. After the face is detected, the nose position is calculated and tracked continuously using the Lucas-Kanade method for optical flow estimation. All touch-screen events have been implemented in the system so that the user can execute all smartphone operations. To evaluate the performance and the effect of device type on execution time, the proposed system was installed on 3 smartphone devices, and 7 trials per device were performed by 3 different able-bodied elderly persons. The results show a significant success rate for the detection of nose and teeth and for the execution of the operations. The execution time of each operation varies only slightly, by 0.72 s on average, depending on the configuration of the smartphone.
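A minimal sketch of the pipeline this abstract describes, using OpenCV's bundled frontal-face Haar cascade (Viola-Jones) and pyramidal Lucas-Kanade optical flow. The nose heuristic (center of the detected face box), the tracker parameters, and the re-detection-on-loss policy are assumptions for illustration, not the paper's implementation:

```python
import cv2
import numpy as np

# Viola-Jones face detector using OpenCV's bundled Haar cascade.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                # front-facing camera
prev_gray, nose_pt = None, None
lk_params = dict(winSize=(21, 21), maxLevel=3)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if nose_pt is None:
        # (Re-)detect the face and seed a nose point to track.
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) > 0:
            x, y, w, h = faces[0]
            # Assumed heuristic: nose ~ center of the face box.
            nose_pt = np.array([[[x + w / 2, y + h / 2]]], np.float32)
    elif prev_gray is not None:
        # Pyramidal Lucas-Kanade tracks the nose point frame to frame.
        new_pt, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, nose_pt, None, **lk_params)
        nose_pt = new_pt if status[0][0] == 1 else None  # lost -> re-detect
        if nose_pt is not None:
            cx, cy = nose_pt.ravel()
            # The tracked nose position would drive the on-screen cursor.
            cv2.circle(frame, (int(cx), int(cy)), 5, (0, 255, 0), -1)

    prev_gray = gray
    cv2.imshow("nose cursor", frame)
    if cv2.waitKey(1) == 27:             # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```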
- Published
- 2021
- Full Text
- View/download PDF
157. UML and User Interface Modeling
- Author
-
Kovacevic, Srdjan, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Bézivin, Jean, editor, and Muller, Pierre-Alain, editor
- Published
- 1999
- Full Text
- View/download PDF
158. Design and evaluation of fusion approach for combining brain and gaze inputs for target selection.
- Author
-
Andéol Évain, Ferran Argelaguet, Géry Casiez, Nicolas Roussel, and Anatole Lécuyer
- Subjects
BCI, gaze tracking, Interaction technique, Multiple input, Hybrid interaction, Neurosciences. Biological psychiatry. Neuropsychiatry
- Abstract
Gaze-based interfaces and Brain-Computer Interfaces (BCIs) allow for hands-free human-computer interaction. In this paper, we investigate the combination of gaze and brain-computer interfaces. We propose a novel selection technique for 2D target acquisition based on input fusion. This new approach combines the probabilistic models for each input in order to better estimate the intent of the user. We evaluated its performance against existing gaze and brain-computer interaction techniques. Twelve participants took part in our study, in which they had to search for and select 2D targets with each of the evaluated techniques. Our fusion-based hybrid interaction technique was found to be more reliable than previous gaze-and-BCI hybrid interaction techniques for 10 of the 12 participants, while being 29% faster on average. However, similarly to what has been observed in hybrid gaze-and-speech interaction, the gaze-only interaction technique still provides the best performance. Our results should encourage the use of input fusion, as opposed to sequential interaction, in order to design better hybrid interfaces.
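The fusion step lends itself to a compact illustration: per-target probability vectors from the two models are combined and renormalized. The naive-Bayes product rule below is an assumption made for the sketch; the abstract only states that the probabilistic models of both inputs are combined to estimate intent:

```python
import numpy as np

def fuse(p_gaze: np.ndarray, p_bci: np.ndarray) -> np.ndarray:
    """Combine two per-target probability vectors into one estimate."""
    joint = p_gaze * p_bci          # assumes the inputs are independent
    return joint / joint.sum()      # renormalize to a distribution

p_gaze = np.array([0.60, 0.25, 0.15])   # e.g. from a gaze-tracking model
p_bci  = np.array([0.40, 0.45, 0.15])   # e.g. from a BCI classifier
p_fused = fuse(p_gaze, p_bci)
print(p_fused, "-> select target", int(np.argmax(p_fused)))
```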
- Published
- 2016
- Full Text
- View/download PDF
159. ScraTouch
- Author
-
Shota Yamanaka and Kaori Ikematsu
- Subjects
Modality (human–computer interaction), Computer Networks and Communications, Computer science, Capacitive sensing, Interaction technique, Touchpad, Human-Computer Interaction, Hardware and Architecture, Mode switching, Computer vision, Artificial intelligence, Shunt (electrical)
- Abstract
We present ScraTouch, an interaction technique that uses fingernails as a new input modality by leveraging capacitive touch sensing. Differentiating between fingertip and fingernail touches requires only tens of milliseconds' worth of shunt-current data from unmodified capacitive touch surfaces, and thus requires no hardware modification. ScraTouch is a simple but practical technique for command invocation and mode switching. An evaluation using a point-and-select task on a touchpad showed that although switching between finger and nail in ScraTouch took slightly more time than the baseline (finger touching without mode switching), over the operations as a whole ScraTouch was just as fast as the baseline and, on average, 29% faster than a long press with a 500-ms threshold. We also confirmed that setting a simple threshold on the measured shunt current for recognition works robustly across users (97% accuracy).
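Since recognition reduces to one threshold on a short shunt-current window, it can be sketched in a few lines; the numeric values and units below are hypothetical, not the paper's calibration:

```python
from statistics import mean

SHUNT_THRESHOLD = 0.5   # hypothetical units; nail contact passes less current

def classify_touch(shunt_samples: list[float]) -> str:
    """shunt_samples: tens of milliseconds of shunt-current readings."""
    return "fingertip" if mean(shunt_samples) >= SHUNT_THRESHOLD else "fingernail"

print(classify_touch([0.90, 0.80, 0.85]))  # -> fingertip
print(classify_touch([0.10, 0.20, 0.15]))  # -> fingernail
```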
- Published
- 2020
- Full Text
- View/download PDF
160. Usability test with medical personnel of a hand-gesture control techniques for surgical environment
- Author
-
Gabriel Pedraza, Luis Eduardo Bautista, and Fernanda Maradei
- Subjects
Computer science, Usability, Interaction technique, Industrial and Manufacturing Engineering, Test (assessment), Human–computer interaction, Gesture recognition, Industrial design, Modeling and Simulation, Orthopedic surgery, Medicine, Augmented reality, Engineering design process
- Abstract
Computer-assisted orthopedic surgery (CAOS) and augmented reality provide guidance for surgical environments in minimally invasive procedures. However, the surgeon's interaction with those systems has limitations, because the surgical environment must be kept sterile. Two touch-less interaction techniques were evaluated within this research work: a vision-based technique used by META® glasses and an electromyography-based technique used by the MYO® armband. The research aim was to establish the most appropriate touch-less interaction technique for manipulating a CAOS system. Usability was evaluated through an experiment with 4 orthopedic surgeons and 47 undergraduate students in a simulated setting. The results suggest that both techniques can be relevant and useful in a surgical environment. However, the MYO was more efficient and possibly more effective at performing the manipulation tasks than the META. In addition, the MYO armband was perceived as more satisfactory than the META glasses and showed improved overall behaviour in the CAOS system manipulation tasks.
- Published
- 2020
- Full Text
- View/download PDF
161. Novel hands-free interaction techniques based on the software switch approach for computer access with head movements
- Author
-
Cagdas Esiyok, Aliye Tosun, Ayhan Aşkın, and Sahin Albayrak
- Subjects
Computer Networks and Communications, Computer science, Computer access, Control (management), Usability, Gyroscope, Interaction technique, Task (project management), Human-Computer Interaction, Human–computer interaction, Head movements, Software, Information Systems
- Abstract
Head-operated computer accessibility tools (CATs) are useful solutions for people with complete head control; but when it comes to people with only reduced head control, computer access becomes a very challenging task, since these users depend on a single head gesture, such as a head nod or a head tilt, to interact with a computer. Any new interaction technique based on a single head gesture will clearly play an important role in developing better CATs that enhance users' self-sufficiency and quality of life. We therefore propose two novel interaction techniques, HeadCam and HeadGyro, within this study. In a nutshell, both interaction techniques are based on our software switch approach and can serve like traditional switches by recognizing head movements via a standard camera or the gyroscope sensor of a smartphone and translating them into virtual switch presses. A usability study with 36 participants (18 motor-impaired, 18 able-bodied) was also conducted to collect both objective and subjective evaluation data. While the HeadGyro software switch exhibited slightly higher performance than HeadCam on each objective evaluation metric, HeadCam was rated better in the subjective evaluation. All participants agreed that the proposed interaction techniques are promising solutions for the computer access task.
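A minimal sketch of the software switch idea for the gyroscope variant: a thresholded angular rate, debounced, is emitted as a virtual switch press. The threshold and debounce values are assumptions for illustration:

```python
import time

RATE_THRESHOLD = 60.0   # deg/s of head motion treated as a "press" (assumed)
DEBOUNCE_S = 0.5        # ignore further triggers for this long (assumed)

class SoftwareSwitch:
    def __init__(self, on_press):
        self.on_press = on_press
        self._last_press = 0.0

    def feed(self, gyro_rate_dps: float) -> None:
        """Feed one gyroscope sample; fire like a traditional switch."""
        now = time.monotonic()
        if abs(gyro_rate_dps) >= RATE_THRESHOLD and now - self._last_press >= DEBOUNCE_S:
            self._last_press = now
            self.on_press()

switch = SoftwareSwitch(lambda: print("virtual switch pressed"))
for sample in [5.0, 12.0, 80.0, 75.0, 3.0]:   # simulated head-tilt burst
    switch.feed(sample)
```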
- Published
- 2020
- Full Text
- View/download PDF
162. Study of the influence of the size of virtual control spheres on 3D rotations [Étude de l'influence de la taille des sphères virtuelles de contrôle sur les rotations 3D]
- Author
-
Antoine, Axel, Malacria, Sylvain, Vogel, Daniel, Casiez, Géry, Technology and knowledge for interaction (LOKI), Inria Lille - Nord Europe, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 (CRIStAL), Centrale Lille-Université de Lille-Centre National de la Recherche Scientifique (CNRS)-Centrale Lille-Université de Lille-Centre National de la Recherche Scientifique (CNRS), Cheriton School of Computer Science [Waterloo] (CS), University of Waterloo [Waterloo], and AFIHM
- Subjects
Interaction techniques, 3D rotations, virtual control sphere, virtual trackball, Computer Science [cs]/Human-Computer Interaction [cs.HC], Interaction technique
- Abstract
Rotating 3D objects on desktop computers with a mouse or a trackpad is a notoriously difficult task, especially for novice users. Techniques relying on a "virtual trackball" have been proposed in the literature, and these continue to be used in most 3D software. While several studies have compared the performance of these techniques, none focused on an intrinsic parameter of these techniques: the radius of the virtual control sphere of the trackball. We present the results of a controlled study investigating the influence of the radius of the control sphere on the performance and behavior of users in a 3D docking task. Surprisingly, the results do not suggest a significant effect of the size of the virtual control sphere on user performance. However, an analysis of user behavior suggests that it influences the user's strategy for how they interact with the virtual trackball.
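The parameter under study, the control-sphere radius R, appears directly in classic virtual-trackball mappings. The sketch below uses Bell's common sphere-plus-hyperbola variant, which is an assumption; the techniques in the study may differ in detail:

```python
import numpy as np

def to_sphere(x: float, y: float, R: float) -> np.ndarray:
    """Map a screen point (origin at sphere center) onto the control sphere."""
    d2 = x * x + y * y
    if d2 <= R * R / 2.0:
        z = np.sqrt(R * R - d2)          # on the sphere
    else:
        z = (R * R / 2.0) / np.sqrt(d2)  # hyperbolic sheet outside it
    return np.array([x, y, z])

def drag_rotation(p0, p1, R):
    """Axis and angle of the rotation induced by dragging p0 -> p1."""
    v0, v1 = to_sphere(*p0, R), to_sphere(*p1, R)
    v0, v1 = v0 / np.linalg.norm(v0), v1 / np.linalg.norm(v1)
    axis = np.cross(v0, v1)
    angle = np.arccos(np.clip(np.dot(v0, v1), -1.0, 1.0))
    return axis, angle

# The same 40-pixel drag rotates the object more with a smaller sphere:
print(drag_rotation((0, 0), (40, 0), R=100)[1])  # ~0.41 rad
print(drag_rotation((0, 0), (40, 0), R=300)[1])  # ~0.13 rad
```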
- Published
- 2022
- Full Text
- View/download PDF
163. Model-based software engineering for interactive systems
- Author
-
Märtin, Christian and Albrecht, Rudolf, editor
- Published
- 1998
- Full Text
- View/download PDF
164. Using Declarative Descriptions to Model User Interfaces with MASTERMIND
- Author
-
Palanque, Philippe, Paternò, Fabio, Schuman, S. A., editor, Palanque, Philippe, editor, and Paternò, Fabio, editor
- Published
- 1998
- Full Text
- View/download PDF
165. Adaptable and adaptive user interfaces for disabled users in the AVANTI project
- Author
-
Stephanidis, C., Paramythis, A., Sfyrakis, M., Stergiou, A., Maou, N., Leventis, A., Paparoulis, G., Karagiannidis, C., Goos, G., editor, Hartmanis, J., editor, van Leeuwen, J., editor, Trigila, Sebastiano, editor, Mullery, Al, editor, Campolargo, Mario, editor, Vanderstraeten, Hans, editor, and Mampaey, Marcel, editor
- Published
- 1998
- Full Text
- View/download PDF
166. In Search for an Ideal Computer-Assisted Drawing System
- Author
-
Igarashi, Takeo, Kawachiya, Sachiko, Matsuoka, Satoshi, Tanaka, Hidehiko, Howard, Steve, editor, Hammond, Judy, editor, and Lindgaard, Gitte, editor
- Published
- 1997
- Full Text
- View/download PDF
167. The Interaction Specification Workspace: Specifying and Designing the Interaction Issues of Virtual Reality Training Environments From Within
- Author
-
Diplas, C. N., Kameas, A. D., Pintelas, P. E., Hansmann, W., editor, Hewitt, W. T., editor, Purgathofer, W., editor, Harrison, Michael Douglas, editor, and Torres, Juan Carlos, editor
- Published
- 1997
- Full Text
- View/download PDF
168. Virtual Environments meet Architectural Design Requirements
- Author
-
Teixeira, José Carlos, Figueiredo, Mauro, ZGDV, Zentrum für Graphische Datenverarbeitung e. V., Teixeira, José C., editor, and Rix, Joachim, editor
- Published
- 1996
- Full Text
- View/download PDF
169. A Model-Based User Interface Architecture: Enhancing a Runtime Environment with Declarative Knowledge
- Author
-
Sukaviriya, Piyawadee “Noi”, Muthukumarasamy, Jayakumar, Frank, Martin, Foley, James D., Hewitt, W. T., editor, Hansmann, W., editor, and Paternó, Fabio, editor
- Published
- 1995
- Full Text
- View/download PDF
170. An Object-Oriented Architecture for Direct Manipulation Based Interactive Graphic Applications: The MAGOO Architecture
- Author
-
Rui Gomes, Mário, Casteleiro, Rui Pedro, Vasconcelos, Fernando, Hewitt, W. T., editor, Gnatz, R., editor, Hansmann, W., editor, Laffra, Chris, editor, Blake, Edwin H., editor, de Mey, Vicki, editor, and Pintado, Xavier, editor
- Published
- 1995
- Full Text
- View/download PDF
171. Towards a Unified and Efficient Command Selection Mechanism for Touch-Based Devices Using Soft Keyboard Hotkeys
- Author
-
Katherine Fennedy, Angad Srivastava, Sylvain Malacria, Simon T. Perrault, Singapore University of Technology and Design (SUTD), Human-Computer Interaction Laboratory ( NUS-HCI Lab), National University of Singapore (NUS), Yale-NUS College, Technology and knowledge for interaction (LOKI), Inria Lille - Nord Europe, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 (CRIStAL), Université de Lille-Centrale Lille-Centre National de la Recherche Scientifique (CNRS)-Université de Lille-Centrale Lille-Centre National de la Recherche Scientifique (CNRS), This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award N°: AISG2-RP-2020-016). This work is also made possible by the Agence Nationale de la Recherche (Discovery, ANR-19-CE33-0006) and by the Singapore Ministry of Education and Singapore University of Technology and Design (SUTD) Start-up Research Grant (T1SRIS18141)., ANR-19-CE33-0006,Discovery,Promouvoir et améliorer la découverte des fonctionnalités et des interactions dans les systèmes interactifs(2019), and Centrale Lille-Université de Lille-Centre National de la Recherche Scientifique (CNRS)-Centrale Lille-Université de Lille-Centre National de la Recherche Scientifique (CNRS)
- Subjects
hotkeys, keyboard shortcuts, Human-Computer Interaction, mobile devices, interaction technique, Computer Science [cs]/Human-Computer Interaction [cs.HC], touch-based devices, command selection
- Abstract
We advocate for the use of hotkeys on touch-based devices by capitalising on soft keyboards, through four studies. First, we evaluated visual designs and recommend icons with command names for novices and letters with command names for experts. Second, we investigated discoverability by asking crowdworkers to use our prototype, with some tasks only doable upon successfully discovering the technique. Discovery rates were high regardless of conditions varying the familiarity and saliency of modifier keys; however, familiarity with desktop hotkeys boosted discoverability. Our third study focused on how prior knowledge of hotkeys could be leveraged; it resulted in a 5% selection-time improvement and identified the role of spatial memory in retention. Finally, we compared our soft keyboard layout with a grid layout similar to FastTap. The latter offered a 12-16% gain in selection speed, but at a high cost in terms of screen estate and low spatial stability.
- Published
- 2022
- Full Text
- View/download PDF
172. Design Objects, Modules, and Models
- Author
-
Treu, Siegfried, Chang, Shi-Kuo, editor, and Treu, Siegfried
- Published
- 1994
- Full Text
- View/download PDF
173. Interface Structures
- Author
-
Treu, Siegfried, Chang, Shi-Kuo, editor, and Treu, Siegfried
- Published
- 1994
- Full Text
- View/download PDF
174. Supportive Tools and Techniques
- Author
-
Treu, Siegfried, Chang, Shi-Kuo, editor, and Treu, Siegfried
- Published
- 1994
- Full Text
- View/download PDF
175. Interaction Characteristics and Options
- Author
-
Treu, Siegfried, Chang, Shi-Kuo, editor, and Treu, Siegfried
- Published
- 1994
- Full Text
- View/download PDF
176. Tactison: a multimedia learning tool for blind children
- Author
-
Dominique, Burger, Amina, Bouraoui, Christian, Mazurier, Serge, Cesarano, Jack, Sagot, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Zagler, Wolfgang L., editor, Busby, Geoffrey, editor, and Wagner, Roland R., editor
- Published
- 1994
- Full Text
- View/download PDF
177. Interaction techniques for object selection and facial expression control in virtual reality [Techniques d’interaction pour la sélection d’objets et le contrôle d’expressions faciales en réalité virtuelle]
- Author
-
Baloup, Marc, Technology and knowledge for interaction (LOKI), Inria Lille - Nord Europe, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 (CRIStAL), Centrale Lille-Université de Lille-Centre National de la Recherche Scientifique (CNRS)-Centrale Lille-Université de Lille-Centre National de la Recherche Scientifique (CNRS), Université de Lille, Géry Casiez, Inria IPL AVATAR, and Université de Lille, INRIA, CRIStAL CNRS
- Subjects
Human-Computer Interaction, Facial expression, Virtual reality, Interaction technique, Computer Science [cs]/Human-Computer Interaction [cs.HC], Pointing, Human-machine interaction, Object selection
- Abstract
Virtual reality technologies are evolving fast with the development of headsets that have increased display resolutions and motion tracking capabilities. These headsets are associated with controllers with many degrees of freedom that offer the possibility to improve and develop new interaction techniques. Indeed, interaction in these immersive environments is often more laborious than in reality. In this thesis, we focus on improving the selection task and on developing techniques dedicated to the control of facial expressions. For the 3D object selection task, the virtual hand and Raycasting are the most frequently used techniques. Although Raycasting allows selecting objects out of reach, selection becomes more difficult when objects are far away, small, or partially occluded. We highlight how the selection errors of Raycasting can be reduced by using adapted filtering. We also developed RayCursor, an improvement of Raycasting that relies on adding a cursor that the user can move along the ray. This cursor allows Raycasting to be combined with proximity-based pointing facilitation techniques. We studied a set of visual feedbacks adapted to the visualization and selection of nearby targets. A series of controlled experiments highlighted the benefits of RayCursor over a set of reference techniques from the literature. For the control of facial expressions, we proposed a set of non-isomorphic techniques to overcome the limitations of the isomorphic techniques proposed in the literature, which rely on the detection or estimation of the user's real expression to transpose it to their avatar. It is difficult to control expressions precisely, and it is impossible for users to control an expression different from the one on their face. Our techniques are based on the decomposition of facial expression control into subtasks that we formalize in a design space: selection of the facial expression, and control of its intensity and duration. Controlled experiments allowed us to isolate the most relevant techniques for each of the subtasks, in order to design a technique we call EmoRayE, which was validated in an ecological experiment.
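The RayCursor idea summarized above reduces, at its core, to placing a cursor at a user-controlled distance along the controller ray and selecting the target nearest to it (a proximity-based facilitation). The sketch below is an illustration under that reading; the data layout and the exact selection rule are assumptions:

```python
import numpy as np

def ray_cursor(origin, direction, d):
    """3D position of the cursor at distance d along the controller ray."""
    direction = direction / np.linalg.norm(direction)
    return origin + d * direction

def select_target(cursor, targets):
    """Proximity facilitation: pick the target closest to the cursor."""
    return int(np.argmin([np.linalg.norm(t - cursor) for t in targets]))

origin = np.array([0.0, 1.6, 0.0])        # controller position
direction = np.array([0.0, 0.0, -1.0])    # ray direction
targets = [np.array([0.1, 1.5, -2.0]), np.array([0.4, 1.2, -3.5])]

cursor = ray_cursor(origin, direction, d=2.0)  # d is moved by the user
print("selected target:", select_target(cursor, targets))  # -> 0
```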
- Published
- 2021
178. Non-isomorphic Interaction Techniques for Controlling Avatar Facial Expressions in VR
- Author
-
Thomas Pietrzak, Martin Hachet, Géry Casiez, Marc Baloup, Technology and knowledge for interaction (LOKI), Inria Lille - Nord Europe, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 (CRIStAL), Centrale Lille-Université de Lille-Centre National de la Recherche Scientifique (CNRS)-Centrale Lille-Université de Lille-Centre National de la Recherche Scientifique (CNRS), Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 (CRIStAL), Centrale Lille-Université de Lille-Centre National de la Recherche Scientifique (CNRS), Université de Lille, Popular interaction with 3d content (Potioc), Laboratoire Bordelais de Recherche en Informatique (LaBRI), Université de Bordeaux (UB)-Centre National de la Recherche Scientifique (CNRS)-École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB)-Université de Bordeaux (UB)-Centre National de la Recherche Scientifique (CNRS)-École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB)-Inria Bordeaux - Sud-Ouest, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria), Institut Universitaire de France (IUF), Ministère de l'Education nationale, de l’Enseignement supérieur et de la Recherche (M.E.N.E.S.R.), Inria challenge Avatar, ACM, and Université de Bordeaux (UB)-École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB)-Centre National de la Recherche Scientifique (CNRS)-Université de Bordeaux (UB)-École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB)-Centre National de la Recherche Scientifique (CNRS)-Inria Bordeaux - Sud-Ouest
- Subjects
VR, Facial expression, Computer science, Input device, Virtual reality, Emoji, Human–computer interaction, Avatar, Emotion, Emoticons, Interaction technique, Computer Science [cs]/Human-Computer Interaction [cs.HC], Computer Science [cs]/Graphics [cs.GR]
- Abstract
The control of an avatar's facial expressions in virtual reality is mainly based on the automated recognition and transposition of the user's facial expressions. These isomorphic techniques are limited to what users can convey with their own face and have recognition issues. To overcome these limitations, non-isomorphic techniques rely on interaction techniques using input devices to control the avatar's facial expressions. Such techniques need to be designed to quickly and easily select and control an expression without disrupting a main task such as talking. We present the design of a set of new non-isomorphic interaction techniques for controlling an avatar's facial expression in VR using a standard VR controller. These techniques were evaluated through two controlled experiments to help design an interaction technique combining the strengths of each approach. This combined technique was evaluated in a final ecological study showing that it can be used in contexts such as social applications.
- Published
- 2021
- Full Text
- View/download PDF
179. Navigating in a process landscape
- Author
-
Tolsby, Haakon, Goos, G., editor, Hartmanis, J., editor, Bass, Leonard J., editor, Gornostaev, Juri, editor, and Unger, Claus, editor
- Published
- 1993
- Full Text
- View/download PDF
180. Worm Selector: Volume Selection in a 3D Point Cloud Through Adaptive Modelling.
- Author
-
Dubois, Emmanuel and Hamelin, Adrien
- Subjects
Cloud computing, 3D printers, Mechanical movements
- Abstract
3D point clouds are more and more widely used, especially because of the proliferation of cheap manual 3D scanners and 3D printers. Due to the large size of 3D point clouds, selecting part of them is very often required. Existing interaction techniques include ray/cone casting and predefined or free-form selection volumes. In order to cope with the traditional trade-off between accuracy, ease of use and flexibility among these different forms of selection techniques in a 3D point cloud, we present the Worm Selector. It allows users to select complex shapes while remaining simple to use and accurate. The Worm Selector relies on three principles: 1) points are selected by progressively constructing a cylinder-like shape (the adaptive worm) through the sequential definition of several sections; 2) a section is defined as a set of two contours linked together with straight lines; 3) each contour is a freely drawn closed shape. A user study reveals that the Worm Selector is significantly faster than a classical selection mechanism based on predefined volumes such as spheres or cuboids, while maintaining a comparable level of precision and recall.
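The building block of each worm section is a freely drawn closed contour. A standard even-odd (ray-casting) test, sketched below, classifies a 2D point against one such contour; the full technique links two contours per section with straight lines, which this sketch omits:

```python
def point_in_contour(pt, contour):
    """Even-odd rule: toggle on each edge a rightward ray from pt crosses."""
    x, y = pt
    inside = False
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        if (y1 > y) != (y2 > y):                        # edge spans the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]   # a drawn contour, simplified
print(point_in_contour((2, 2), square))     # True
print(point_in_contour((5, 2), square))     # False
```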
- Published
- 2017
- Full Text
- View/download PDF
181. Design and Evaluation of Fusion Approach for Combining Brain and Gaze Inputs for Target Selection.
- Author
-
Évain, Andéol, Argelaguet, Ferran, Lécuyer, Anatole, Casiez, Géry, and Roussel, Nicolas
- Subjects
Brain-computer interfaces, Eye tracking, Human-computer interaction
- Abstract
Gaze-based interfaces and Brain-Computer Interfaces (BCIs) allow for hands-free human–computer interaction. In this paper, we investigate the combination of gaze and BCIs. We propose a novel selection technique for 2D target acquisition based on input fusion. This new approach combines the probabilistic models for each input in order to better estimate the intent of the user. We evaluated its performance against the existing gaze and brain–computer interaction techniques. Twelve participants took part in our study, in which they had to search for and select 2D targets with each of the evaluated techniques. Our fusion-based hybrid interaction technique was found to be more reliable than the previous gaze and BCI hybrid interaction techniques for 10 of the 12 participants, while being 29% faster on average. However, similarly to what has been observed in hybrid gaze-and-speech interaction, the gaze-only interaction technique still provides the best performance. Our results should encourage the use of input fusion, as opposed to sequential interaction, in order to design better hybrid interfaces.
- Published
- 2016
- Full Text
- View/download PDF
182. A freeze-object interaction technique for handheld augmented reality systems.
- Author
-
Arshad, Haslina, Chowdhury, Shahan, Chun, Lam, Parhizkar, Behrang, and Obeidy, Waqas
- Subjects
Augmented reality, Google Glass, Real-time computing, Interaction design (human-computer interaction), Interactive computer systems
- Abstract
This paper presents an improved freeze interaction technique for handheld augmented reality. A freeze interaction technique allows users to freeze the augmented view and interact with the virtual content while the camera image is still. The strength of the freeze interaction technique is that it overcomes a shaky view and lets users experience comfortable interaction; its weakness is that it freezes the whole augmented scene. While a virtual object is updating continuously, the real-world view from the camera remains a still picture until the user unfreezes the scene, reducing the real-time augmented reality experience, which is not attractive to the user. To overcome this problem, a 'Freeze-Object' interaction technique has been implemented for handheld augmented reality. The Freeze-Object interaction technique allows the user to interact with a frozen virtual object in a live real-world scene. A comparative user study was conducted to evaluate the 'Freeze-Object' interaction technique in a handheld touch AR environment, comparing it with the existing freeze technique in terms of user performance and user preference. Users were asked to perform three basic manipulation tasks (translation, rotation and scaling) using both interaction techniques. The results indicated a significant difference between the two techniques for the translation task, and for the overall tasks users preferred Freeze-Object over the existing freeze interaction technique, because the former lets users see the live real-world view with a frozen virtual object. The improved freeze interaction technique can be used in various applications, such as interior design and maintenance.
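The contrast with a whole-scene freeze can be sketched as a per-frame update rule: the camera image is always redrawn, while a frozen object simply stops consuming tracker poses. Class and field names below are hypothetical, not the paper's implementation:

```python
class VirtualObject:
    def __init__(self):
        self.pose = None       # last pose used for rendering
        self.frozen = False

    def update(self, tracked_pose):
        if not self.frozen:    # a frozen object keeps its cached pose
            self.pose = tracked_pose

def render_frame(camera_image, obj, tracked_pose):
    obj.update(tracked_pose)
    # The live camera image is always drawn; a whole-scene freeze would
    # hold this background image still as well.
    return camera_image, obj.pose

obj = VirtualObject()
print(render_frame("frame-1", obj, "pose-1"))  # object follows tracking
obj.frozen = True                              # user freezes the object
print(render_frame("frame-2", obj, "pose-2"))  # pose-1 kept, frame-2 live
```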
- Published
- 2016
- Full Text
- View/download PDF
183. Integrating Computer Graphics and Computer Vision for Industrial Applications
- Author
-
Encarnação, José L., Groß, Markus, Hofmann, G. Rainer, Hübner, Wolfgang, and Kunii, Tosiyasu L., editor
- Published
- 1992
- Full Text
- View/download PDF
184. An Object-Oriented Framework for Direct-Manipulation User Interfaces
- Author
-
Shan, Yen-Ping, Hewitt, W. T., editor, Gnatz, R., editor, Duce, D. A., editor, Blake, Edwin H., editor, and Wisskirchen, Peter, editor
- Published
- 1991
- Full Text
- View/download PDF
185. Direct Manipulation Techniques for the Human-Computer Interface
- Author
-
Ziegler, Jürgen E., Hewitt, W. T., editor, Gnatz, R., editor, Duce, D. A., editor, Garcia, Gérald, editor, and Herman, Ivan, editor
- Published
- 1991
- Full Text
- View/download PDF
186. The OO-AGES Model — An Overview
- Author
-
Gomes, Mario Rui, Fernandes, João Carlos Lourenço, Hewitt, W. T., editor, Gnatz, R., editor, Duce, D. A., editor, Duce, David A., editor, Gomes, M. Rui, editor, Hopgood, F. Robert A., editor, and Lee, John R., editor
- Published
- 1991
- Full Text
- View/download PDF
187. Exploration of Techniques for Rapid Activation of Glanceable Information in Head-Worn Augmented Reality
- Author
-
Feiyu Lu, Doug A. Bowman, and Shakiba Davari
- Subjects
business.industry ,Human–computer interaction ,Computer science ,Usability ,Augmented reality ,Context (language use) ,Overlay ,Interaction technique ,business ,Social acceptance - Abstract
Future augmented reality (AR) glasses may provide pervasive and continuous access to everyday information. However, it remains unclear how to address the issue of virtual information overlaying and occluding real-world objects and information that are of interest to users. One approach is to keep virtual information sources inactive until they are explicitly requested, so that the real world remains visible. In this research, we explored the design of interaction techniques with which users can activate virtual information sources in AR. We studied this issue in the context of Glanceable AR, in which virtual information resides at the periphery of the user's view. We proposed five techniques and evaluated them in both sitting and walking scenarios. Our results describe the usability, user preference, and social acceptance of each technique, and yield design recommendations for achieving optimal performance. Our findings can inform the design of lightweight techniques to activate virtual information displays in future everyday AR interfaces.
- Published
- 2021
- Full Text
- View/download PDF
188. EyeMU Interactions: Gaze + IMU Gestures on Mobile Devices
- Author
-
Karan Ahuja, Andy Kong, Chris Harrison, and Mayank Goel
- Subjects
Computer science, Inertial measurement unit, Human–computer interaction, Gesture recognition, Menu bar, Eye tracking, Interaction technique, Mobile device, Gaze, Gesture
- Abstract
As smartphone screens have grown in size, single-handed use has become more cumbersome. Interactive targets that are easily seen can be hard to reach, particularly notifications and upper menu bar items. Users must either adjust their grip to reach distant targets, or use their other hand. In this research, we show how gaze estimation using a phone's user-facing camera can be paired with IMU-tracked motion gestures to enable a new, intuitive, and rapid interaction technique on handheld phones. We describe our proof-of-concept implementation and gesture set, built on state-of-the-art techniques and capable of self-contained execution on a smartphone. In our user study, we found a mean Euclidean gaze error of 1.7 cm and a seven-class motion gesture classification accuracy of 97.3%.
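One way to read the pairing: the gaze estimate chooses the target, and an IMU motion gesture triggers the action on it. The toy threshold detector below stands in for the paper's learned seven-class gesture classifier; all names and values are assumptions:

```python
def nearest_target(gaze_xy, targets):
    """targets: {name: (x_cm, y_cm)}; return the one closest to gaze."""
    return min(targets, key=lambda n: (targets[n][0] - gaze_xy[0]) ** 2
                                      + (targets[n][1] - gaze_xy[1]) ** 2)

def detect_flick(accel_x: float) -> bool:
    return abs(accel_x) > 12.0          # hypothetical m/s^2 threshold

targets = {"notification": (1.0, 2.0), "menu": (4.0, 0.5)}
gaze = (1.4, 2.3)                       # from the user-facing camera model
if detect_flick(accel_x=15.0):          # IMU-tracked motion gesture
    print("activate", nearest_target(gaze, targets))
```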
- Published
- 2021
- Full Text
- View/download PDF
189. Ubiquitous Interactions for Heads-Up Computing: Understanding Users’ Preferences for Subtle Interaction Techniques in Everyday Settings
- Author
-
Shengdong Zhao, Shardul Sapkota, and Ashwin Ram
- Subjects
Body regions, Human–computer interaction, Computer science, Everyday tasks, Wearable computer, Information needs, Interaction technique
- Abstract
In order to satisfy users' information needs while incurring minimum interference with their ongoing activities, previous studies have proposed using Optical Head-mounted Displays (OHMDs) with different input techniques. However, it is unclear how these techniques compare against one another in terms of being comfortable and non-intrusive during a user's everyday tasks. Through a Wizard-of-Oz study, we therefore compared four subtle interaction techniques (feet, arms, thumb-index fingers, and teeth) in three daily hands-busy tasks under different settings (giving a presentation while sitting, carrying bags while walking, and folding clothes while standing). We found that while each interaction technique has its niche, thumb-index-finger interaction has the best overall balance and is most preferred as a cross-scenario subtle interaction technique for smart glasses. We provide a further evaluation of thumb-index-finger interaction through an in-the-wild study with 8 users. Our results contribute to an enhanced understanding of user preferences for subtle interaction techniques with smart glasses for everyday use.
- Published
- 2021
- Full Text
- View/download PDF
190. User Burden of Microinteractions: An In-lab Experiment Examining User Performance and Perceived Burden Related to In-situ Self-reporting
- Author
-
Bingjian Huang, Sun Young Park, Mark W. Newman, Yuxuan Li, and Xinghui Yan
- Subjects
Smartwatch, Computer science, Human–computer interaction, Perception, Context (language use), Interaction technique
- Abstract
In-situ self-reporting on smartwatches has been widely used to collect data about participants' experience. Yet it places a burden on participants by requiring an immediate response in context. Such user burden can be studied through users' performance and perceptions. This paper evaluates six interaction techniques to study the user burden of in-situ self-reporting, from both the performance and the perception aspects, under three simulated scenarios (gaming, social chatting, and walking) through an in-lab experiment with twenty-four participants. Findings show that user performance was not always aligned with perceived burden, owing to factors such as users' acceptable interaction-technique features for self-reporting, the context of use, and users' acquired skills over time. This study also sheds light on how user burden is experienced across the different stages of a self-reporting microinteraction. Reflecting on the findings, we discuss the implications for understanding and minimizing user burden related to in-situ self-reporting as well as other types of microinteractions.
- Published
- 2021
- Full Text
- View/download PDF
191. History and Basic Components of CAD
- Author
-
Encarnação, José L., Lindner, Rolf, Schlechtendahl, Ernst G., Encarnação, José L., editor, Bø, K., editor, Foley, J. D., editor, Guedj, R. A., editor, ten Hagen, P. J. W., editor, Hopgood, F. R. A., editor, Hosaka, M., editor, Lucas, M., editor, Requicha, A. G., editor, Lindner, Rolf, and Schlechtendahl, Ernst G.
- Published
- 1990
- Full Text
- View/download PDF
192. CamCutter: Impromptu Vision-Based Cross-Device Application Sharing
- Author
-
Kazuki Takashima, Takuma Hagiwara, Yoshifumi Kitamura, and Morten Fjeld
- Subjects
Computer science, Impromptu, Task (project management), Human-Computer Interaction, Human–computer interaction, Application sharing, Reading (process), Systems architecture, Mobile device, Mobile interaction, Software
- Abstract
As the range of handheld, mobile and desktop devices expands and worldwide demand for collaborative application tools increases, there is a growing need for higher-speed, impromptu cross-device application sharing that keeps up with workplace requirements for on-site or remote collaboration. To address this, we have developed CamCutter, a cross-device interaction technique enabling a user to quickly select and share an application running on another screen using the camera of a handheld device. This technique can accurately identify the targeted application on a display using our adapted computer vision algorithm, system architecture and software implementation, allowing impromptu, real-time and synchronized application sharing between devices. For desktop and meeting-room set-ups, we performed a technical evaluation measuring the accuracy and speed of migration. For a single-user reading task and a collaborative composition task, we carried out a user study comparing our technique with commercial screen-sharing applications. The results of this study showed both higher performance of, and preference for, our system. Finally, we discuss CamCutter's limitations and present insights for future vision-based cross-device application sharing.
- Published
- 2019
- Full Text
- View/download PDF
193. THE-3DI: Tracing head and eyes for 3D interactions
- Author
-
Sehat Ullah and Muhammad Raees
- Subjects
Computer Networks and Communications, Computer science, OpenGL, Usability, Interaction technique, Tracing, Virtual reality, Hardware and Architecture, Media Technology, Computer vision, Artificial intelligence, Software, Cognitive load, Gesture
- Abstract
Gesture-based interfaces offer a suitable platform for interactions in Virtual Environments (VEs). However, the difficulty of learning and making distinct gestures affects the performance of an interactive system. By incorporating computer vision in Virtual Reality (VR), this paper presents an intuitive interaction technique in which the states and positions of the eyes are traced for interaction. With comparatively low cognitive load, the technique offers an easy-to-use interaction solution for VR applications. Unlike other gestural interfaces, interactions are performed in distinct phases, and the transition from one phase to another is enacted with a simple blink of the eyes. In a given phase, interaction along an arbitrary axis is performed by a perceptive head gesture: rolling, pitching or yawing. To follow the trajectory of the eyes in real time, coordinate mapping is performed dynamically. The proposed technique is implemented in a case-study project, EBI (Eyes Blinking based Interaction). In the EBI project, real-time detection and tracking of the eyes are performed at the back end, while at the front end the virtual scene is rendered accordingly using the OpenGL library. To assess the accuracy, usability and cognitive load of the proposed technique, the EBI project was evaluated 280 times in three different evaluation sessions. With an ordinary camera, an average accuracy of 81.4% was achieved; assessment with a high-quality camera revealed that the accuracy of the system could be raised further. As a whole, the findings of the evaluations support the applicability of the technique in the emerging domains of VR.
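The phase logic described above can be sketched as a small state machine: a blink advances the phase, and the dominant head rotation drives motion along one axis within the current phase. The phase names and the axis mapping are assumptions:

```python
PHASES = ["translate", "rotate", "scale"]   # assumed phase set

class EBIStateMachine:
    def __init__(self):
        self.phase = 0

    def on_blink(self):
        self.phase = (self.phase + 1) % len(PHASES)   # blink switches phase

    def on_head(self, roll, pitch, yaw):
        """Map the dominant head motion to an axis in the current phase."""
        axis, value = max((("x", pitch), ("y", yaw), ("z", roll)),
                          key=lambda av: abs(av[1]))
        return PHASES[self.phase], axis, value

sm = EBIStateMachine()
print(sm.on_head(roll=0.0, pitch=0.3, yaw=0.1))  # ('translate', 'x', 0.3)
sm.on_blink()
print(sm.on_head(roll=0.5, pitch=0.0, yaw=0.1))  # ('rotate', 'z', 0.5)
```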
- Published
- 2019
- Full Text
- View/download PDF
194. ProxiTalk
- Author
-
Fengshi Zheng, Yang Zhican, Yuanchun Shi, and Chun Yu
- Subjects
Direct voice input, Computer Networks and Communications, Computer science, Interaction technique, Human-Computer Interaction, Activity recognition, Empirical study, Hardware and Architecture, Inertial measurement unit, Phone, Human–computer interaction, Mobile interaction, Gesture
- Abstract
Speech input, such as voice assistants and voice messages, is an attractive interaction option for mobile users today. However, despite its popularity, smartphone speech input has a usage limitation: users need to press a button or say a wake word to activate it before use, which is not very convenient. To address this, we match the motion of bringing the phone to the mouth with the user's intention to use voice input. In this paper, we present ProxiTalk, an interaction technique that allows users to enable smartphone speech input by simply moving the phone close to their mouths. We study how users use ProxiTalk and systematically investigate the recognition abilities of various data sources (e.g., using the front camera to detect facial features, using two microphones to estimate the distance between phone and mouth). Results show that it is feasible to utilize the smartphone's built-in sensors and instruments to detect ProxiTalk use and classify gestures. An evaluation study shows that users can quickly learn ProxiTalk and are willing to use it. In conclusion, our work provides empirical support that ProxiTalk is a practical and promising option for enabling smartphone speech input, coexisting with current trigger mechanisms.
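A toy version of the activation decision: speech input turns on when a bring-to-mouth motion is followed by a small estimated phone-mouth distance. The thresholds are invented; the actual system fuses several built-in sensors with learned models:

```python
def bring_to_mouth(pitch_delta_deg: float, accel_peak: float) -> bool:
    """Did the phone just get raised? (thresholds are assumptions)"""
    return pitch_delta_deg > 30.0 and accel_peak > 8.0

def near_mouth(mouth_dist_cm: float) -> bool:
    """e.g. distance estimated from the two microphones."""
    return mouth_dist_cm < 10.0

def should_activate(pitch_delta, accel_peak, mouth_dist) -> bool:
    return bring_to_mouth(pitch_delta, accel_peak) and near_mouth(mouth_dist)

print(should_activate(42.0, 9.5, 7.0))   # True: start listening
print(should_activate(5.0, 2.0, 40.0))   # False: phone stays idle
```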
- Published
- 2019
- Full Text
- View/download PDF
195. Battle of minds: a new interaction approach in BCI games through competitive reinforcement
- Author
-
Yoones A. Sekhavat
- Subjects
Game mechanics, Battle, Computer Networks and Communications, Computer science, Interaction technique, Mode (computer interface), Hardware and Architecture, Human–computer interaction, Media Technology, Reinforcement, Software, Brain–computer interface
- Abstract
Brainwaves can be used as auxiliary inputs in the design of game mechanics for attention training games. Although attention training games are generally developed in single-player mode, multi-player games can be more engaging and fun to play. This paper proposes a new interaction technique between players in multi-player car racing games, in which a novel competitive reinforcement approach is used to regulate difficulty parameters using the players' brainwaves. The results of our experiments suggest that competitive reinforcement is more effective at increasing attention level than the positive and negative reinforcement conditions.
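One plausible reading of competitive reinforcement, shown as a sketch: each player's difficulty scales with the gap between the two players' attention levels, so that a focused rival makes the race harder. The update rule and gain are assumptions, not the paper's exact mechanics:

```python
def regulate_difficulty(base_speed, my_attention, rival_attention, gain=0.5):
    """Attention values in [0, 1]; returns the rival car speed I face."""
    gap = rival_attention - my_attention
    return base_speed * (1.0 + gain * gap)

# A less attentive player faces a faster rival, and vice versa:
print(regulate_difficulty(100.0, my_attention=0.4, rival_attention=0.8))  # 120.0
print(regulate_difficulty(100.0, my_attention=0.8, rival_attention=0.4))  # 80.0
```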
- Published
- 2019
- Full Text
- View/download PDF
196. Microscale thermophoresis: warming up to a new biomolecular interaction technique
- Author
-
Vanessa Meier-Stephenson, Tyler Mrozowich, and Trushar R. Patel
- Subjects
Chemistry, Microscale thermophoresis, Nanotechnology, Interaction technique, General Biochemistry, Genetics and Molecular Biology
- Abstract
Biomolecules, such as RNA, DNA, proteins and polysaccharides, are at the heart of fundamental cellular processes. These molecules differ greatly with each other in terms of their structures and functions. However, in the midst of the diversity of biomolecules is the unifying feature that they interact with each other to execute a viable biological system. Interactions of biomolecules are critical for cells to survive and replicate, for food metabolism to produce energy, for antibiotics and vaccines to function, for spreading of diseases and for every other biological process. An improved understanding of these interactions is crucial for studying how cells and organs function, to appreciate how diseases are caused and how infections occur, with infinite implications in medicine and therapy. Many biochemical and biophysical techniques are currently being employed to study biomolecular interactions. Microscale thermophoresis (MST) is a relatively new biophysical technique that can provide powerful insight into the interactions of biomolecules and is quickly being adopted by an increasing number of researchers worldwide. This article provides a brief description of principles underpinning the MST process, in addition to benefits and limitations.
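For context (this formula is standard for affinity measurements generally, not taken from the article): MST titration data are commonly fitted with the exact binding isotherm to extract a dissociation constant $K_d$, where $c$ is the titrated ligand concentration and $T$ the fixed target concentration:

```latex
\[
  f_{\mathrm{bound}}(c) =
  \frac{(c + T + K_d) - \sqrt{(c + T + K_d)^2 - 4\,cT}}{2T},
  \qquad
  F(c) = F_{\mathrm{free}} + \bigl(F_{\mathrm{bound}} - F_{\mathrm{free}}\bigr)\, f_{\mathrm{bound}}(c)
\]
```

Here $F(c)$ is the measured (normalized) fluorescence signal at each ligand concentration, and $F_{\mathrm{free}}$, $F_{\mathrm{bound}}$ are its values for the fully unbound and fully bound target.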
- Published
- 2019
- Full Text
- View/download PDF
197. How do you approach your client? : A study of how the auditor interacts with their client [Hur pratar du med din klient? : En studie om interaktionsval mellan revisorn och klienten]
- Author
-
Johansson, Emil, Landén, Hanna, Johansson, Emil, and Landén, Hanna
- Abstract
Master Thesis in Business Administration, School of Business and Economics at Linnaeus University, VT 2021. Authors: Emil Johansson and Hanna Landén. Supervisor: Sven-Olof Yrjö Collin. Examiner: Andreas Jansson. Title: How do you approach your client? - A study of how the auditor interacts with their client. Background: The coronavirus pandemic's entry into society has meant that people are expected to reduce their physical contact with other people. As the auditing industry has come a long way in digitalization, it is possible that it is not affected to a greater extent; on the other hand, physical contact is important for maintaining a good relationship with the client. How the interaction takes place can affect the relationship, which in turn can affect audit quality. The study therefore examines these interactions by studying the choice of interaction technique between auditor and client. Purpose: The aim of the study is to map and explain how the auditor interacts with the client in the various parts of the audit. Method: Based on a theoretical framework, the interaction techniques used between auditor and client are compiled and categorized in a typology. Furthermore, a multi-method study is carried out, beginning with a qualitative pilot study in which the typology is tested and which inductively captures what previous studies have not explored. The theoretical framework and the pilot study then form the basis for the design of the quantitative main study, which is intended to explain the choice of interaction technique in the various parts of the audit. Conclusion: The study maps and explains why an interaction technique is chosen in each part of the audit by describing the interaction techniques in a typology and an empirical study. The study finds three factors that influence the choice of technique: the level of digitalization, the time required, and how advanced the technique is. The study also finds substitution effects between some of the interaction techniques. This mapping of interaction techniques also enables future research on the subject.
- Published
- 2021
198. Deep Learning based Lip Movement Technique for Mute
- Author
-
K. Neeraja, G. Praneeth, and K. Srinivas Rao
- Subjects
Feature (computer vision), Computer science, Deep learning, Computer vision, Image processing, Interaction technique, Artificial intelligence, Virtual reality, Visualization
- Abstract
With the significant changes in technology, virtual reality (VR) is used in this article as a simple means of visual activity and interaction, providing visual nodal options supported by automated lip-reading technology. With this advancement, the state of a human can be identified and captured through lip movements. Deep learning is used to analyze the visual choices in real time: using image processing, the system identifies the driver's visual features and evaluates them in critical time. For traditional lip-reading recognition systems, the need for responsive applications is difficult to fulfill. Deep learning is an emerging artificial intelligence technique that works like a normal human brain with thinking capability; it consists of different layers that are used to evaluate details, much as neurons do in the brain. In this article, the surface area of the lips is taken as the key feature of the lip movement: the horizontal and vertical distances of the lips are used to calculate the surface area, which is then used to estimate parameters stored in the database. Based on the results, the accuracy is more than 85%, so the proposed system is more effective and efficient than other traditional methods.
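The abstract derives the lip "surface area" from the horizontal and vertical lip distances. One natural reading, shown purely as an assumption since the exact formula is not given, is an ellipse approximation with those distances as axes:

```python
import math

def lip_area(horizontal_dist: float, vertical_dist: float) -> float:
    """Area of an ellipse whose axes are the two lip distances."""
    return math.pi * (horizontal_dist / 2.0) * (vertical_dist / 2.0)

print(round(lip_area(6.0, 2.0), 2))   # 9.42, e.g. an open mouth
print(round(lip_area(5.0, 0.4), 2))   # 1.57, nearly closed lips
```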
- Published
- 2021
- Full Text
- View/download PDF
199. Production of Mobile English Language Teaching Application Based on Text Interface Using Deep Learning
- Author
-
Yunsik Cho and Jinmo Kim
- Subjects
text interface, Computer Networks and Communications, Computer science, Interface (Java), interaction, mobile, Convolutional neural network, English language teaching application, Handwriting, Human–computer interaction, Input method, Electrical and Electronic Engineering, Graphical user interface, Deep learning, Interaction technique, Augmented reality, Hardware and Architecture, Control and Systems Engineering, Signal Processing, Artificial intelligence, Electronics
- Abstract
This paper proposes a novel text interface using deep learning in a mobile platform environment and presents English language teaching applications created with this interface. First, an interface for handwriting text is designed with a simple structure based on the touch input of mobile platform applications. This input method is easier and more convenient than the existing graphical user interface (GUI), in which menu items such as buttons are selected repeatedly or step by step. Next, an interaction that intuitively connects the input text to behaviors and decisions is proposed: a text handwritten on the interface is recognized through the Extended Modified National Institute of Standards and Technology (EMNIST) dataset and a convolutional neural network (CNN) model and mapped to a behavior. Finally, using the proposed interface, we create English language teaching applications that can effectively facilitate learning to write the alphabet and words by hand. User satisfaction with the interface during the educational process is then analyzed and verified through a survey experiment.
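A small CNN of the kind the abstract describes for recognizing handwritten EMNIST characters (28x28 grayscale). The architecture is illustrative, not the paper's exact model; the EMNIST "letters" split has 26 classes, with labels shifted to 0-25 here:

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),          # EMNIST images are 28x28
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(26, activation="softmax"),   # one class per letter
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would use EMNIST letter images scaled to [0, 1]:
# model.fit(x_train[..., None] / 255.0, y_train - 1, epochs=5)
```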
- Published
- 2021
200. VXSlate: Exploring Combination of Head Movements and Mobile Touch for Large Virtual Display Interaction
- Author
-
Morten Fjeld, Karol Chlasta, Khanh-Duy Le, Andreas Kunz, Tanh Quang Tran, and Krzysztof Krejtz
- Subjects
Computer science, Headset, Virtual representation, Interaction technique, Virtual reality, Translation (geometry), Task (computing), Human–computer interaction, Rotation (mathematics), Mobile device
- Abstract
Virtual Reality (VR) headsets can open opportunities for users to accomplish complex tasks on large virtual displays using compact and portable devices. However, interacting with such large virtual displays using existing interaction techniques can cause fatigue, especially for precise manipulation tasks, due to the lack of physical surfaces. To deal with this issue, we explored the design of VXSlate, an interaction technique that uses a large virtual display as an expansion of a tablet. We combined the user's head movements, as tracked by the VR headset, with touch interaction on the tablet. Using VXSlate, the user's head movements position a virtual representation of the tablet, together with the user's hand, on the large virtual display. This allows the user to perform fine-tuned multi-touch content manipulations. In a user study with seventeen participants, we investigated the effects of VXSlate in problem-solving tasks involving content manipulations at different levels of difficulty, such as translation, rotation, scaling, and sketching. As a baseline for comparison, off-the-shelf touch-controller interactions were used. Overall, VXSlate allowed participants to complete the tasks with completion times and accuracy comparable to touch-controller interactions. After an interval of use, VXSlate significantly reduced users' time to perform scaling tasks in content manipulations, as well as the perceived effort. We reflect on the advantages and disadvantages of VXSlate for content manipulation on large virtual displays and explore further applications within the VXSlate design space.
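The head-pointing half of the technique can be sketched as a ray-plane intersection: the headset's forward ray, intersected with the virtual display plane, gives the position of the virtual tablet representation, which touch input then manipulates. The scene layout and names are assumptions:

```python
import numpy as np

def head_ray_on_display(head_pos, head_dir, plane_point, plane_normal):
    """Intersect the head ray with the display plane; None if parallel."""
    head_dir = head_dir / np.linalg.norm(head_dir)
    denom = np.dot(head_dir, plane_normal)
    if abs(denom) < 1e-6:
        return None
    t = np.dot(plane_point - head_pos, plane_normal) / denom
    return None if t < 0 else head_pos + t * head_dir

head_pos = np.array([0.0, 1.7, 0.0])
head_dir = np.array([0.2, 0.0, -1.0])              # looking slightly right
plane_point = np.array([0.0, 1.7, -3.0])           # display 3 m ahead
plane_normal = np.array([0.0, 0.0, 1.0])
print(head_ray_on_display(head_pos, head_dir, plane_point, plane_normal))
# -> tablet representation placed near [0.6, 1.7, -3.0] on the display
```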
- Published
- 2021
- Full Text
- View/download PDF