3,321 results on '"Vanderdonckt, Jean"'
Search Results
152. Towards Uniformed Task Models in a Model-Based Approach
- Author
-
Limbourg, Quentin, Pribeanu, Costin, Vanderdonckt, Jean, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, and Johnson, Chris, editor
- Published
- 2001
- Full Text
- View/download PDF
153. Task Modelling for Context-Sensitive User Interfaces
- Author
-
Pribeanu, Costin, Limbourg, Quentin, Vanderdonckt, Jean, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, and Johnson, Chris, editor
- Published
- 2001
- Full Text
- View/download PDF
154. The Task-Dialog and Task-Presentation Mapping Problem: Some Preliminary Results
- Author
-
Limbourg, Quentin, Vanderdonckt, Jean, Souchon, Nathalie, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Palanque, Philippe, editor, and Paternò, Fabio, editor
- Published
- 2001
- Full Text
- View/download PDF
155. Hand Gesture Recognition for an Off-the-Shelf Radar by Electromagnetic Modeling and Inversion
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, UCL - SST/ICTM - Institute of Information and Communication Technologies, Electronics and Applied Mathematics, UCL - SST/ELI - Earth and Life Institute, Sluÿters, Arthur, Lambot, Sébastien, and Vanderdonckt, Jean
- Abstract
Microwave radar sensors in human-computer interactions have several advantages compared to wearable and image-based sensors, such as privacy preservation, high reliability regardless of the ambient and lighting conditions, and a larger field of view. However, the raw signals produced by such radars are high-dimensional and relatively complex to interpret. Advanced data processing, including machine learning techniques, is therefore necessary for gesture recognition. While these approaches can reach high gesture recognition accuracy, using artificial neural networks requires a significant amount of gesture templates for training, and calibration is radar-specific. To address these challenges, we present a novel data processing pipeline for hand gesture recognition that combines advanced full-wave electromagnetic modelling and inversion with machine learning. In particular, the physical model accounts for the radar source, radar antennas, radar-target interactions and the target itself, i.e., the hand in our case. To make this processing feasible, the hand is emulated by an equivalent infinite planar reflector, for which analytical Green’s functions exist. The apparent dielectric permittivity, which depends on the hand size, electric properties, and orientation, determines the wave reflection amplitude based on the distance from the hand to the radar. Through full-wave inversion of the radar data, the physical distance as well as this apparent permittivity are retrieved, thereby reducing by several orders of magnitude the dimension of the radar dataset while keeping the essential information. Finally, the estimated distance and apparent permittivity as a function of gesture time are used to train the machine learning algorithm for gesture recognition. This physically-based dimension reduction enables the use of simple gesture recognition algorithms, such as template-matching recognizers, that can be trained in real time and provide competitive accuracy with only a few samples.
- Published
- 2022
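The abstract above reduces each radar frame to two scalars, the hand distance and the apparent permittivity, before handing the resulting time series to a simple template-matching recognizer. The Python sketch below only illustrates that last stage under the stated idea; it is not the authors' pipeline, and the gesture labels, template values, and the 1-nearest-neighbour choice over dynamic time warping are hypothetical.

```python
import math

def dtw_distance(a, b):
    """Dynamic time warping cost between two sequences of (distance, permittivity) pairs."""
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])  # Euclidean distance between feature pairs
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

def recognize(sample, templates):
    """Return the label of the nearest template (1-NN over the DTW cost)."""
    return min(templates, key=lambda label: dtw_distance(sample, templates[label]))

# Hypothetical templates: each gesture is a short time series of (distance_m, apparent_permittivity).
templates = {
    "push":  [(0.50, 3.0), (0.40, 3.2), (0.30, 3.5), (0.25, 3.6)],
    "pull":  [(0.25, 3.6), (0.30, 3.5), (0.40, 3.2), (0.50, 3.0)],
    "swipe": [(0.40, 3.1), (0.40, 3.4), (0.40, 3.2), (0.40, 3.1)],
}
print(recognize([(0.48, 3.0), (0.38, 3.3), (0.28, 3.5)], templates))  # -> "push"
```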
156. Exploration of the Impact of 3D Gestural Interaction on the Customer Experience
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, Sellier, Quentin, Poncin, Ingrid, Vanderdonckt, Jean, and 28th International Conference on Recent Advances in Retailing and Consumer Science
- Published
- 2022
157. µV: An Articulation, Rotation, Scaling, and Translation Invariant (ARST) Multi-stroke Gesture Recognizer
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, Magrofuoco, Nathan, Roselli, Paolo, and Vanderdonckt, Jean
- Abstract
Finger-based gesture input has become a major interaction modality for surface computing. Due to the low precision of the finger and the variation in gesture production, multi-stroke gestures are still challenging to recognize in various setups. In this paper, we present µV, a multi-stroke gesture recognizer that addresses the properties of articulation, rotation, scaling, and translation invariance by combining $P+’s cloud-matching for articulation invariance with !FTL’s local shape distance for RST-invariance. We evaluate µV against five competitive recognizers on MMG, an existing gesture set, and on two new versions for smartphones and tablets, MMG+ and RMMG+, a randomly rotated version on both platforms. µV is significantly more accurate than its predecessors when rotation invariance is required and not significantly inferior when it is not. µV is also significantly faster than others with many samples and not significantly slower with few samples.
- Published
- 2022
158. UsyBus: A Communication Framework among Reusable Agents integrating Eye-Tracking in Interactive Applications
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, Jambon, Francis, and Vanderdonckt, Jean
- Abstract
Eye movement analysis is a popular method to evaluate whether a user interface meets the users’ requirements and abilities. However, with current tools, setting up a usability evaluation with an eye-tracker is resource-consuming, since the areas of interest are defined manually and exhaustively, and must be redefined each time the user interface changes. This process is also error-prone, since eye movement data must be finely synchronized with user interface changes. These issues become more serious when the user interface layout changes dynamically in response to user actions. In addition, current tools do not allow easy integration into interactive applications, and opportunistic code must be written to link these tools to user interfaces. To address these shortcomings and to leverage the capabilities of eye-tracking, we present UsyBus, a communication framework for autonomous, tight coupling among reusable agents. These agents are responsible for collecting data from eye-trackers, analyzing eye movements, and managing communication with other modules of an interactive application. UsyBus accepts multiple heterogeneous eye-trackers as input and provides multiple configurable outputs depending on the data to be exploited. Modules exchange data based on the UsyBus communication framework, thus creating a customizable multi-agent architecture. UsyBus application domains range from usability evaluation to the design of gaze interaction applications. Two case studies, composed of reusable modules from our portfolio, exemplify the implementation of the UsyBus framework.
- Published
- 2022
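The UsyBus record above describes reusable agents that exchange eye-tracking data over a common communication framework. The toy publish/subscribe bus below is a minimal sketch of that general idea, not the UsyBus API; the topic names, the fixation threshold, and the agent functions are invented for illustration.

```python
from collections import defaultdict
from typing import Callable

class Bus:
    """Toy publish/subscribe bus: agents register callbacks per topic and publish messages."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: dict) -> None:
        for callback in self._subscribers[topic]:
            callback(message)

bus = Bus()

# A gaze-analysis agent consumes raw samples and republishes fixations (the 100 ms threshold is arbitrary).
def gaze_analyzer(sample: dict) -> None:
    if sample["duration_ms"] >= 100:
        bus.publish("fixation", {"x": sample["x"], "y": sample["y"]})

# A UI module reacts to fixations, e.g. to highlight the widget under the gaze.
def ui_module(fixation: dict) -> None:
    print(f"fixation at ({fixation['x']}, {fixation['y']})")

bus.subscribe("gaze", gaze_analyzer)
bus.subscribe("fixation", ui_module)
bus.publish("gaze", {"x": 120, "y": 340, "duration_ms": 150})  # emitted by an eye-tracker agent
```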
159. Engineering the Transition of Interactive Collaborative Software from Cloud Computing to Edge Computing
- Author
-
UCL - SST/ICTM/INGI - Pôle en ingénierie informatique, UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, Ortegat, Guillaume, Grolaux, Donatien, Riviere, Etienne, and Vanderdonckt, Jean
- Abstract
The “Software as a Service” (SaaS) model of cloud computing popularized online multiuser collaborative software. Two famous examples of this class of software are Office 365 from Microsoft and Google Workspace. Cloud technology removes the need to install and update the software on end users’ computers and provides the necessary underlying infrastructure for online collaboration. However, to provide a good end-user experience, cloud services require an infrastructure able to scale up to the task and allow low-latency interactions with a variety of users worldwide. This is a limiting factor for actors that do not possess such infrastructure. Unlike cloud computing, which leaves the computational and interactional capabilities of end users’ devices unexploited, the edge computing paradigm promises to exploit them as much as possible. To investigate the potential of edge computing over cloud computing, this paper presents a method for engineering interactive collaborative software supported by edge devices as a replacement for cloud computing resources. Our method is able to handle user interface aspects such as connection, execution, migration, and disconnection differently depending on the available technology. We exemplify our approach by developing a distributed Pictionary game deployed in two scenarios: a nonshared scenario where each participant interacts only with their own device and a shared scenario where participants also share a common device, such as a TV. After a theoretical comparative study of edge vs. cloud computing, an experiment compares the two implementations to determine their effect on the end user’s perceived experience and on perceived vs. real latency.
- Published
- 2022
160. Informing Future Gesture Elicitation Studies for Interactive Applications that Use Radar Sensing
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, Villarreal Narvaez, Santiago, Şiean, Alexandru-Ionuţ, Sluÿters, Arthur, Vatavu, Radu-Daniel, and Vanderdonckt, Jean
- Abstract
We show how two recently introduced visual tools, RepliGES and GEStory, can be used conjointly to inform possible replications of Gesture Elicitation Studies (GES), with a case study centered on gestures that can be sensed with radars. Starting from a GES identified in GEStory, we employ the dimensions of the RepliGES space to enumerate eight possible ways to replicate that study towards gaining new insights into end users’ preferences for gesture-based interaction in applications that use radar sensors.
- Published
- 2022
161. QuantumLeap, a Framework for Engineering Gestural User Interfaces based on the Leap Motion Controller
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, UCL - SST/ICTM/INGI - Pôle en ingénierie informatique, Sluÿters, Arthur, Ousmer, Mehdi, Roselli, Paolo, and Vanderdonckt, Jean
- Abstract
Despite the tremendous progress made for recognizing gestures acquired by various devices, such as the Leap Motion Controller, developing a gestural user interface based on such devices still induces a significant programming and software engineering effort before obtaining a running interactive application. To facilitate this development, we present QuantumLeap, a framework for engineering gestural user interfaces based on the Leap Motion Controller. Its pipeline software architecture can be parameterized to define a workflow among modules for acquiring gestures from the Leap Motion Controller, for segmenting them, recognizing them, and managing their mapping to functions of the application. To demonstrate its practical usage, we implement two gesture-based applications: an image viewer that allows healthcare workers to browse DICOM medical images of their patients without any hygiene issues commonly associated with touch user interfaces and a large-scale application for managing multimedia contents on wall screens. To evaluate the usability of QuantumLeap, seven participants took part in an experiment in which they used QuantumLeap to add a gestural interface to an existing application.
- Published
- 2022
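QuantumLeap is described above as a parameterizable pipeline running from gesture acquisition through segmentation and recognition to the mapping of gestures onto application functions. The sketch below illustrates such a pipeline in generic Python; the stage implementations, frame format, and gesture-to-function mapping are placeholders, not QuantumLeap code.

```python
from typing import Callable, Iterable, List

class Pipeline:
    """Toy parameterizable pipeline: each stage transforms the output of the previous one."""
    def __init__(self, *stages: Callable):
        self.stages = stages

    def run(self, data):
        for stage in self.stages:
            data = stage(data)
        return data

# Hypothetical stages standing in for acquisition, segmentation, recognition, and mapping.
def segment(frames: Iterable[dict]) -> List[list]:
    """Group consecutive frames in which a hand is visible into candidate gestures."""
    segments, current = [], []
    for frame in frames:
        if frame.get("hand_present"):
            current.append(frame)
        elif current:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments

def recognize(segments: List[list]) -> List[str]:
    """Placeholder recognizer: labels a segment by its length only."""
    return ["swipe" if len(seg) > 2 else "tap" for seg in segments]

def dispatch(labels: List[str]) -> None:
    actions = {"swipe": "next_image", "tap": "select"}  # gesture-to-function mapping
    for label in labels:
        print(f"{label} -> {actions.get(label, 'ignore')}")

frames = [{"hand_present": True}] * 4 + [{"hand_present": False}] + [{"hand_present": True}] * 2
Pipeline(segment, recognize, dispatch).run(frames)
```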
162. RepliGES and GEStory: Visual Tools for Systematizing and Consolidating Knowledge on User-Defined Gestures
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, Gheran, Bogdan-Florin, Villarreal Narvaez, Santiago, Vatavu, Radu-Daniel, Vanderdonckt, Jean, and International Conference on Advanced Visual Interfaces (AVI 2022)
- Abstract
The body of knowledge accumulated by gesture elicitation studies (GES), although useful, large, and extensive, is also heterogeneous, scattered in the scientific literature across different venues and fields of research, and difficult to generalize to other contexts of use represented by different gesture types, sensing devices, applications, and user categories. To address such aspects, we introduce RepliGES, a conceptual space that supports (1) replications of gesture elicitation studies to confirm, extend, and complete previous findings, (2) reuse of previously elicited gesture sets to enable new discoveries, and (3) extension and generalization of previous findings with new methods of analysis and for new user populations towards consolidated knowledge of user-defined gestures. Based on RepliGES, we introduce GEStory, an interactive design space and visual tool, to structure, visualize and identify user-defined gestures from 216 published gesture elicitation studies.
- Published
- 2022
163. (Semi-)Automatic Computation of User Interface Consistency
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, Burny, Nicolas, Vanderdonckt, Jean, and ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS '22)
- Abstract
Many measures exist to (semi-)automatically compute the quality of a graphical user interface, such as aesthetic metrics, visual metrics, and performance metrics. These measures are mostly individual as they apply to a single graphical user interface at a time. Unlike these measures, consistency requires evaluating a number of screens within the same application (intra-application consistency) or across applications (inter-application consistency). This paper presents a formula, a method, and supporting software for computing this consistency and its counterpart, inconsistency, either completely automatically when the interface segmentation is performed by the software or semi-automatically when the interface segmentation is performed manually by the end-user.
- Published
- 2022
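The abstract above refers to a formula for intra- and inter-application consistency computed over segmented interfaces. As a purely illustrative stand-in (the paper's actual formula is not reproduced here), one could score intra-application consistency as the mean pairwise Jaccard similarity of the widget types found on each screen; the screen names and widget sets below are invented.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two widget-type sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def consistency(screens: dict) -> float:
    """Mean pairwise Jaccard similarity across an application's screens (a toy measure)."""
    pairs = list(combinations(screens.values(), 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

screens = {
    "login":    {"text_field", "button", "label"},
    "settings": {"checkbox", "button", "label"},
    "about":    {"label", "button"},
}
print(round(consistency(screens), 2))  # intra-application consistency in [0, 1]
```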
164. Gestural-Vocal Coordinated Interaction on Large Displays
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, UCL - SST/ICTM/INGI - Pôle en ingénierie informatique, Parthiban, Vik, Maes, Pattie, Sellier, Quentin, Sluÿters, Arthur, and Vanderdonckt, Jean
- Abstract
On large displays, using keyboard and mouse input is challenging because small mouse movements do not scale well with the size of the display and individual elements on screen. We present “Large User Interface” (LUI), which coordinates gestural and vocal interaction to increase the range of dynamic surface area of interactions possible on large displays. The interface leverages real-time continuous feedback of free-handed gestures and voice to control a set of applications such as photos, videos, 3D models, maps, and a gesture keyboard. Utilizing a single stereo camera and a voice assistant, LUI does not require calibration or many sensors to operate, and it can be easily installed and deployed. We report results from user studies where participants found LUI efficient, learnable with minimal instruction, and preferred it to point-and-click interfaces.
- Published
- 2022
165. Using Gestural Interaction Technology to Improve the Consumer Experience: An Abstract
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, Sellier, Quentin, Poncin, Ingrid, and Vanderdonckt, Jean
- Abstract
Nowadays, creating a positive experience is a key source of competitive advantage (Lemon & Verhoef, 2016). A good experience makes a person five times more likely to recommend a company and more likely to purchase in the future (Yohn, 2019). Besides, gesture interaction technology appears to be a promising way to provide individuals with a richer overall experience than classical user interfaces (Daugherty et al., 2015). However, the concrete impact of this type of interaction on the consumer experience remains largely unknown, although understanding it is necessary for the effective adoption of such interfaces. In this work, we seek a better understanding of the role of gestural interaction in the consumer experience through qualitative and quantitative approaches.
- Published
- 2022
166. Enhancing playful customer experience with personalization
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, Lambillotte, Laetitia, Magrofuoco, Nathan, Poncin, Ingrid, and Vanderdonckt, Jean
- Abstract
Retailers develop personalized websites with the aim of improving customer experience. However, we still have limited knowledge about the effect of personalization on customer experience and the underlying processes. With a lab experiment, this research specifically examines the effect of actual personalization and perceived personalization on playful customer experience using both subjective and objective measures, with the support of eye-tracking techniques. We show that personalization, regardless of whether it is perceived or not, enhances the playful customer experience of a retailing website. In addition, we highlight the presence of two concomitant processes. Content needs to be perceived as personalized to influence the subjective playful customer experience, but actual personalization does influence objective playful customer experience. Although customers spend the same time on the website, they focus more of their attention on their favorite products when content is personalized. Such focused attention leads them to select their favorite products for purchase.
- Published
- 2022
167. SnappView, A Software Development Kit for Supporting End-user Mobile Interface Review
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, de Ryckel, Xavier, Sluÿters, Arthur, and Vanderdonckt, Jean
- Abstract
This paper presents SnappView, an open-source software development kit that facilitates end-user review of graphical user interfaces for mobile applications and streamlines their input into a continuous design life cycle. SnappView structures this user interface review process into four cumulative stages: (1) a developer creates a mobile application project with user interface code instrumented by only a few instructions governing SnappView and deploys the resulting application on an application store; (2) any tester, such as an end-user, a designer, or a reviewer, while interacting with the instrumented user interface, shakes the mobile device to freeze and capture its screen and to provide insightful multimodal feedback such as textual comments, critiques, suggestions, drawings by stroke gestures, and voice or video recordings, with a level of importance; (3) the screenshot is captured with the application, browser, and status data and sent with the feedback to the SnappView server; and (4) a designer then reviews the collected and aggregated feedback data and passes them to the developer to address the raised usability problems. Another cycle then initiates an iterative design. This paper presents the motivations and process for performing mobile application review based on SnappView. Based on this process, we deployed on the App Store “WeTwo”, a real-world mobile application for finding various personal activities, over a one-month period with 420 active users. This application served for a user experience evaluation conducted with N1=14 developers to reveal the advantages and shortcomings of the toolkit from a development point of view. The same application was also used in a usability evaluation conducted with N2=22 participants to reveal the advantages and shortcomings from an end-user viewpoint.
- Published
- 2022
168. Two-dimensional Stroke Gesture Recognition : A Survey
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, UCL - SST/ICTM/INGI - Pôle en ingénierie informatique, Magrofuoco, Nathan, Roselli, Paolo, and Vanderdonckt, Jean
- Abstract
The expansion of touch-sensitive technologies, ranging from smartwatches to wall screens, triggered a wider use of gesture-based user interfaces and encouraged researchers to invent recognizers that are fast and accurate for end-users while being simple enough for practitioners. Since the pioneering work on two-dimensional (2D) stroke gesture recognition based on feature extraction and classification, numerous approaches and techniques have been introduced to classify uni- and multi-stroke gestures, satisfying various properties of articulation-, rotation-, scale-, and translation-invariance. As the domain abounds in different recognizers, it becomes difficult for the practitioner to choose the right recognizer, depending on the application and for the researcher to understand the state-of-the-art. To address these needs, a targeted literature review identified 16 significant 2D stroke gesture recognizers that were submitted to a descriptive analysis discussing their algorithm, performance, and properties, and a comparative analysis discussing their similarities and differences. Finally, some opportunities for expanding 2D stroke gesture recognition are drawn from these analyses.
- Published
- 2022
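Most of the recognizers covered by the survey above obtain articulation-, scale-, and translation-invariance through a common preprocessing of the stroke. The sketch below shows classic resampling and normalisation steps in that spirit; it is a generic illustration, not any specific recognizer from the survey, and the point count and reference size are arbitrary.

```python
import math

def resample(points, n=64):
    """Resample a stroke to n roughly equidistant points (articulation invariance)."""
    path_len = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    interval, acc, out, prev = path_len / (n - 1), 0.0, [points[0]], points[0]
    for p in points[1:]:
        d = math.dist(prev, p)
        while acc + d >= interval and d > 0:
            t = (interval - acc) / d
            q = (prev[0] + t * (p[0] - prev[0]), prev[1] + t * (p[1] - prev[1]))
            out.append(q)
            prev, d, acc = q, d - (interval - acc), 0.0
        acc += d
        prev = p
    while len(out) < n:  # guard against floating-point shortfall
        out.append(points[-1])
    return out

def normalize(points, size=100.0):
    """Scale to a reference bounding box and translate the centroid to the origin."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    w, h = (max(xs) - min(xs)) or 1.0, (max(ys) - min(ys)) or 1.0
    scaled = [(p[0] * size / w, p[1] * size / h) for p in points]
    cx, cy = sum(p[0] for p in scaled) / len(scaled), sum(p[1] for p in scaled) / len(scaled)
    return [(p[0] - cx, p[1] - cy) for p in scaled]

stroke = [(0, 0), (3, 1), (6, 4), (9, 9)]
print(normalize(resample(stroke, n=8)))
```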
169. Towards evaluation of graphical user interfaces based on a reproducible quality model : application to visual design aesthetics
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, UCL - Ecole Polytechnique de Louvain, Vanderdonckt, Jean, Van Roy, Peter, Zen, Mathieu, Dupuy-Chessa, Sophie, Seffah, Ahmed, and Burny, Nicolas
- Abstract
User interfaces (UI) are one of the main media of interaction with the devices we use on a daily basis. Evaluation of UIs is and will remain a topical research domain as long as existing evaluation methods are refined and improved and new methods appear to evaluate new types of UIs. However, not all the existing evaluation methods include explicit, formal, calculable, and interpretable quality models, which are a necessary condition to ensure their replicability. The large variety of evaluation methods and studies, coupled with various other factors such as the hardly scalable and error-prone process of dataset creation, led to a replicability crisis in the field of Human-Computer Interaction. In order to address the aforementioned challenges and concerns, there is a need to provide the field of experimental studies on GUI visual design with methodologies and tools that make the process of dataset creation efficient and realistic, so as to foster the replicability of experimental studies. The purpose of this thesis is to provide a way of evaluating the quality of GUIs in an explicit, formal, calculable, and thus reproducible way. This thesis introduces, motivates, defines, implements, and validates a methodology, based on a conceptual model and supported by a software tool, for producing explicit, formal, calculable, and interpretable quality models for the evaluation of GUI visual design. (FSA - Sciences de l'ingénieur) -- UCL, 2022
- Published
- 2022
170. Model-based intelligent user interface adaptation: challenges and future directions
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, Abrahão, Silvia, Insfran Pelozo, Emilio, Sluÿters, Arthur, and Vanderdonckt, Jean
- Abstract
Adapting the user interface of a software system to the requirements of the context of use continues to be a major challenge, particularly when users become more demanding in terms of adaptation quality. A considerable number of methods have, over the past three decades, provided some form of modelling with which to support user interface adaptation. There is, however, a crucial issue in analysing the concepts, the underlying knowledge, and the user experience afforded by these methods when comparing their benefits and shortcomings. These methods are so numerous that positioning a new method in the state of the art is challenging. This paper, therefore, defines a conceptual reference framework for intelligent user interface adaptation containing a set of conceptual adaptation properties that are useful for model-based user interface adaptation. The objective of this set of properties is to understand any method, to compare various methods, and to generate new ideas for adaptation. We also analyse the opportunities that machine learning techniques could provide for data processing and analysis in this context, and identify some open challenges in order to guarantee an appropriate user experience for end-users. The relevant literature and our experience in research and industrial collaboration have been used as the basis on which to propose future directions in which these challenges can be addressed.
- Published
- 2022
171. Consistent, Continuous, and Customizable Mid-Air Gesture Interaction for Browsing Multimedia Objects on Large Displays
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, Sluÿters, Arthur, Sellier, Quentin, Vanderdonckt, Jean, Parthiban, Vik, and Maes, Pattie
- Abstract
Browsing multimedia objects, such as photos, videos, documents, and maps represents a frequent activity in a context of use where an end-user interacts on a large vertical display close to bystanders, such as a meeting in a corporate environment or a family display at home. In these contexts, mid-air gesture interaction is suitable for a large variety of end-users, provided that gestures are consistently mapped to similar functions across media types. We present Lui (Large User Interface), a ready-to-deploy and ready-to-use application for browsing multimedia objects by consistent mid-air gesture interaction on a large display that is customizable by mapping new gesture classes to functions in real-time. The method followed to design the gesture interaction and to develop the application consists of four stages: (1) a contextual gesture elicitation study (23 participants × 18 referents = 414 proposed gestures) is conducted with the various media types to determine a consensus set satisfying consistency, (2) the continuous integration of this consensus set with gesture recognizers into a pipeline software architecture, (3) a comparative testing of these recognizers on the consensus set to configure the pipeline with the most efficient ones, and (4) an evaluation of the interface regarding its global quality and specific to the implemented gestures.
- Published
- 2022
172. A Systematic Procedure for Comparing Template-Based Gesture Recognizers
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, Ousmer, Mehdi, Sluÿters, Arthur, Magrofuoco, Nathan, Roselli, Paolo, and Vanderdonckt, Jean
- Abstract
To consistently compare gesture recognizers under identical conditions, a systematic procedure for comparative testing should investigate how the number of templates, the number of sampling points, the number of fingers, and their configuration with other hand parameters such as hand joints, palm, and fingertips impact performance. This paper defines a systematic procedure for comparing recognizers using a series of test definitions, i.e., an ordered list of test cases with controlled variables common to all test cases. For each test case, its accuracy is measured by the recognition rate and its responsiveness by the execution time. This procedure is applied to six state-of-the-art template-based gesture recognizers on SHREC2019, a gesture dataset that contains simple and complex hand gestures and is widely used in the literature for competitions in a user-independent scenario, and on Jackknife-lm, another challenging dataset. The results of the procedure identify the configurations in which each recognizer is the most accurate or the fastest.
- Published
- 2022
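The procedure above varies controlled parameters such as the number of templates and measures, for each test case, the recognition rate and the execution time. The skeleton below shows what one such test case could look like in Python; the recognizer interface (train/classify) and the random sampling strategy are assumptions for illustration, not the paper's actual tooling.

```python
import random
import time
from statistics import mean

def run_test_case(recognizer, dataset, n_templates, seed=0):
    """One test case: train with n_templates samples per class, then measure the
    recognition rate and the mean execution time on the remaining samples.
    `recognizer` is any object exposing train(templates) and classify(sample)
    (an assumed interface); `dataset` maps each class label to a list of samples."""
    rng = random.Random(seed)
    templates, test_set = {}, []
    for label, samples in dataset.items():
        picked = rng.sample(samples, n_templates)
        templates[label] = picked
        test_set += [(label, s) for s in samples if s not in picked]
    recognizer.train(templates)
    correct, times = 0, []
    for label, sample in test_set:
        start = time.perf_counter()
        predicted = recognizer.classify(sample)
        times.append(time.perf_counter() - start)
        correct += predicted == label
    return correct / len(test_set), mean(times)
```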
173. Theoretically-Defined vs. User-Defined Squeeze Gestures
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, Villarreal Narvaez, Santiago, Sluÿters, Arthur, Vanderdonckt, Jean, and Mbaki Luzayisu, Efrem
- Abstract
This paper presents theoretical and empirical results about user-defined gesture preferences for squeezable objects by focusing on a particular object: a deformable cushion. We start with a theoretical analysis of potential gestures for this squeezable object by defining a multi-dimension taxonomy of squeeze gestures composed of 82 gesture classes. We then empirically analyze the results of a gesture elicitation study resulting in a set of N=32 participants × 21 referents = 672 elicited gestures, further classified into 26 gesture classes. We also contribute to the practice of gesture elicitation studies by explaining why we started from a theoretical analysis (by systematically exploring a design space of potential squeeze gestures) to end up with an empirical analysis (by conducting a gesture elicitation study afterward): the intersection of the results from these sources confirms or disconfirms consensus gestures. Based on these findings, we extract from the taxonomy a subset of recommended gestures that give rise to design implications for gesture interaction with squeezable objects.
- Published
- 2022
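Gesture elicitation studies such as the one above typically quantify consensus per referent with the agreement rate of Vatavu and Wobbrock (2015). The snippet below computes that standard measure; the example proposals for a squeezable cushion are invented and do not come from the paper.

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR(r) for one referent (Vatavu & Wobbrock, 2015):
    AR = sum_i |P_i|*(|P_i|-1) / (|P|*(|P|-1)), where the P_i are groups of identical proposals."""
    n = len(proposals)
    if n < 2:
        return 0.0
    groups = Counter(proposals).values()
    return sum(k * (k - 1) for k in groups) / (n * (n - 1))

# Hypothetical proposals by 8 participants for the referent "volume up" on a squeezable cushion.
print(agreement_rate(["squeeze_top"] * 5 + ["squeeze_side"] * 2 + ["twist"]))
```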
174. Feature-based context-oriented software development
- Author
-
UCL - SST/ICTM/INGI - Pôle en ingénierie informatique, UCL - Ecole Polytechnique de Louvain, Mens, Kim, Dumas, Bruno, Pecheur, Charles, Vanderdonckt, Jean, Cardozo, Nicolás, Luyten, Kris, and Duhoux, Benoît
- Abstract
Context-oriented programming enables dynamic software evolution by supporting the creation of software systems that dynamically adapt their behaviour depending on the context of their surrounding environment. Upon sensing a new particular situation in the surrounding environment, which is reified as a context, the system activates this context and then continues by selecting and activating fine-grained features corresponding to that context. These features, representing functionalities specific to that context, are then installed in the system to refine its behaviour at runtime. Conceiving such systems is complex due to their high dynamicity and the combinatorial explosion of possible contexts and corresponding features that could be active. To address this complexity, we propose a feature-based context-oriented software development approach to design and implement context-oriented applications. This approach unifies context-oriented programming, feature modelling and dynamic software product lines into a single paradigm. In this novel paradigm we separate clearly and explicitly contexts and features that we model in terms of a context model, a feature model and the mapping between them. We also design an architecture, implement a programming framework, and develop a supporting development methodology and two visualisation tools to help designers and programmers in their modelling, development and debugging tasks. Furthermore we also develop a user interface library in our approach to create applications with user interfaces that are adaptive. To validate our feature-based context-oriented software development approach, we designed five case studies and implemented three of them. Then we discussed the design qualities to evaluate our implementation of the programming framework. We also assessed the usability of the programming framework from our own perspective based on the cognitive dimensions of notations framework. Finally we also conducted four user studies with, (FSA - Sciences de l'ingénieur) -- UCL, 2022
- Published
- 2022
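The thesis abstract above rests on context-oriented programming: sensing a situation activates a context, which installs the fine-grained features mapped to it. The toy Python classes below sketch that activation mechanism only; the context names, features, and mapping are illustrative and unrelated to the actual framework developed in the thesis.

```python
class Feature:
    """A fine-grained behaviour refinement installed while its context is active."""
    def __init__(self, name, behaviour):
        self.name, self.behaviour = name, behaviour

class System:
    """Toy context-oriented system: activating a context installs its mapped features."""
    def __init__(self, context_to_features):
        self.context_to_features = context_to_features  # the context-to-feature mapping
        self.active_features = []

    def activate(self, context):
        self.active_features = self.context_to_features.get(context, [])

    def run(self):
        for feature in self.active_features:
            feature.behaviour()

mapping = {
    "low_battery": [Feature("dim_screen", lambda: print("dimming screen"))],
    "driving":     [Feature("voice_ui", lambda: print("switching to voice interface"))],
}
system = System(mapping)
system.activate("driving")  # a sensed situation reified as a context
system.run()
```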
175. Proceedings of the 6th International Conference on Computer-Human Interaction Research and Applications
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, da Silva, Hugo Plácido, Vanderdonckt, Jean, Holzinger, Andreas, and Constantine, Larry
- Abstract
This volume contains the proceedings of the 6th International Conference on Computer-Human Interaction Research and Applications (CHIRA 2022), which was held in Valletta, Malta as a hybrid event, from 27 to 28 October. CHIRA is sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC), and is held in cooperation with the ACM Special Interest Group on Management Information Systems and the European Society for Socially Embedded Technologies (EUSSET). The purpose of CHIRA is to bring together professionals, academics and students who are interested in the advancement of research and practical applications of human-technology & human-computer interaction. Different aspects of Computer-Human Interaction were covered in four parallel tracks: Human Factors and Information Systems, Interactive Devices, Interaction Design, and Adaptive and Intelligent Systems. Human-Computer Interaction is receiving renewed interest as human-AI interaction due to the increasing success of artificial intelligence and its applications. In addition to paper presentations, CHIRA’s program included three invited talks delivered by internationally distinguished speakers: Alan Dix (Swansea University, United Kingdom), Truth in an Age of Information; Karen Holtzblatt (InContext Design, United States), Contextual Design: Origins, Ethics, and Diverse Voices; and Abigail Sellen (Microsoft Research Cambridge, United Kingdom), Transforming the Future of Work through Human-Centred Design. CHIRA received 36 paper submissions from 17 countries, of which 17% were accepted as full papers. The high quality of the papers received imposed difficult choices during the review process. To evaluate each submission, a double-blind paper review was performed by the Program Committee, whose members are highly qualified independent researchers in the CHIRA topic areas. All presented papers will be submitted for indexation by Scopus, Google Scholar, DBLP Computer Science Bibliography.
- Published
- 2022
176. Creation and Validation of a Model to Understand the Impact of Gestural Interaction Technology on the Customer Experience
- Author
-
UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, Sellier, Quentin, Poncin, Ingrid, and Vanderdonckt, Jean
- Abstract
Nowadays, creating a positive customer experience is a key source of competitive advantage (Lemon & Verhoef, 2016). A good experience makes a person five times more likely to recommend a company and more likely to purchase in the future (Yohn, 2019). Besides, gestural interaction technology (Daugherty et al., 2015) appears to be a promising way to provide individuals with a more immersive and richer experience than classical user interfaces (Vanderdonckt & Vatavu, 2018). However, the concrete impact of this type of interaction on the customer experience remains largely unknown, although understanding it is necessary for the effective adoption of such interfaces. The aim of this work is therefore to overcome this gap and to provide a sufficient understanding for efficiently designing such interfaces, with the objective of providing a more complete and rich customer experience. In order to do so, we selected the most representative variables to assess and constructed a conceptual model. We then built a digital catalog controllable by mid-air hand gestures. After a data collection phase in a controlled laboratory environment, we analyzed the quantitative and qualitative data. Doing so allowed us to answer our research questions: “what are the key aspects to consider when designing gestural interaction interfaces?” and “how does the gestural interaction impact the customer experience?”. This work contributes to the current literature by providing understanding of the impact of new technologies in retailing, and particularly in the case of 3D gestural interaction. From a managerial point of view, we provide guidelines for the use of this type of new technology and its impact on the customer experience.
- Published
- 2022
177. Enabling astronaut self-scheduling using a robust modelling and scheduling system (RAMS) : A Mars analog use case
- Author
-
UCL - SSS/LDRI - Louvain Drug Research Institute, UCL - SSS/IREC/NMSK - Neuro-musculo-skeletal Lab, UCL - SST/ICTM - Institute of Information and Communication Technologies, Electronics and Applied Mathematics, UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, Saint-Guillain, Michael, Vanderdonckt, Jean, Burny, Nicolas, Pletser, Vladimir, Vaquero, Tiago, Chien, Steve, Karl, Alexander, Comein, Audrey, Chamart, Cheyenne, Wain, Cyril, Casla, Ignacio S., Jacobs, Jean, Manon, Julie, Meert, Julien, Drouet, Sirga, and Jet Propulsion Laboratory (JPL), NASA, California
- Published
- 2022
178. Gestural-Vocal Coordinated Interaction on Large Displays
- Author
-
Program in Media Arts and Sciences (Massachusetts Institute of Technology), Parthiban, Vik, Maes, Pattie, Sellier, Quentin, Sluÿters, Arthur, and Vanderdonckt, Jean
- Published
- 2022
179. Getting users involved in aligning their needs with business processes models and systems
- Author
-
Sousa, Kenia, Mendonça, Hildeberto, Lievyns, Amandine, Vanderdonckt, Jean, Smolnik, Stefan, Urbach, Nils, and Fjermestad, Jerry L.
- Published
- 2011
- Full Text
- View/download PDF
180. Computer-Aided Design of Menu Bar and Pull-Down Menus for Business Oriented Applications
- Author
-
Vanderdonckt, Jean, Hansmann, W., editor, Purgathofer, W., editor, Sillion, F., editor, Duke, David, editor, and Puerta, Angel, editor
- Published
- 1999
- Full Text
- View/download PDF
181. Evaluating a graphical notation for modelling software development methodologies
- Author
-
Sousa, Kenia, Vanderdonckt, Jean, Henderson-Sellers, Brian, and Gonzalez-Perez, Cesar
- Published
- 2012
- Full Text
- View/download PDF
182. Hand Gesture Recognition for an Off-the-Shelf Radar by Electromagnetic Modeling and Inversion
- Author
-
Sluÿters, Arthur, primary, Lambot, Sébastien, additional, and Vanderdonckt, Jean, additional
- Published
- 2022
- Full Text
- View/download PDF
183. Enabling astronaut self-scheduling using a robust modelling and scheduling system (RAMS) : A Mars analog use case
- Author
-
Saint-Guillain, Michael, Vanderdonckt, Jean, Burny, Nicolas, Pletser, Vladimir, Vaquero, Tiago, Chien, Steve, Karl, Alexander, Comein, Audrey, Chamart, Cheyenne, Wain, Cyril, Casla, Ignacio S., Jacobs, Jean, Manon, Julie, Meert, Julien, Drouet, Sirga, UCL - SSS/LDRI - Louvain Drug Research Institute, UCL - SSS/IREC/NMSK - Neuro-musculo-skeletal Lab, UCL - SST/ICTM - Institute of Information and Communication Technologies, Electronics and Applied Mathematics, and UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations
- Published
- 2022
184. RepliGES and GEStory: Visual Tools for Systematizing and Consolidating Knowledge on User-Defined Gestures
- Author
-
Gheran, Bogdan-Florin, Villarreal Narvaez, Santiago, Vatavu, Radu-Daniel, Vanderdonckt, Jean, International Conference on Advanced Visual Interfaces (AVI 2022), and UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations
- Subjects
Gestural input, Empirical studies in interaction design, User interface design, Participatory design
- Abstract
The body of knowledge accumulated by gesture elicitation studies (GES), although useful, large, and extensive, is also heterogeneous, scattered in the scientific literature across different venues and fields of research, and difficult to generalize to other contexts of use represented by different gesture types, sensing devices, applications, and user categories. To address such aspects, we introduce RepliGES, a conceptual space that supports (1) replications of gesture elicitation studies to confirm, extend, and complete previous findings, (2) reuse of previously elicited gesture sets to enable new discoveries, and (3) extension and generalization of previous findings with new methods of analysis and for new user populations towards consolidated knowledge of user-defined gestures. Based on RepliGES, we introduce GEStory, an interactive design space and visual tool, to structure, visualize and identify user-defined gestures from 216 published gesture elicitation studies.
- Published
- 2022
185. Key Activities for a Development Methodology of Interactive Applications
- Author
-
Bodart, François, Hennebert, Anne-Marie, Leheureux, Jean-Marie, Provot, Isabelle, Vanderdonckt, Jean, Zucchinetti, Giovanni, Benyon, David, editor, and Palanque, Philippe, editor
- Published
- 1996
- Full Text
- View/download PDF
186. Computer-Aided Window Identification in Trident
- Author
-
Bodart, François, Hennebert, Anne-Marie, Leheureux, Jean-Marie, Vanderdonckt, Jean, Nordby, Knut, editor, Helmersen, Per, editor, Gilmore, David J., editor, and Arnesen, Svein A., editor
- Published
- 1995
- Full Text
- View/download PDF
187. Accessing Guidelines Information with Sierra
- Author
-
Vanderdonckt, Jean, Nordby, Knut, editor, Helmersen, Per, editor, Gilmore, David J., editor, and Arnesen, Svein A., editor
- Published
- 1995
- Full Text
- View/download PDF
188. Towards a Systematic Building of Software Architecture: the TRIDENT Methodological Guide
- Author
-
Bodart, François, Hennebert, Anne-Marie, Leheureux, Jean-Marie, Provot, Isabelle, Sacré, Benoît, Vanderdonckt, Jean, Palanque, Philippe, editor, and Bastide, Rémi, editor
- Published
- 1995
- Full Text
- View/download PDF
189. A Model-Based Approach to Presentation: A Continuum from Task Analysis to Prototype
- Author
-
Bodart, François, Hennebert, Anne-Marie, Leheureux, Jean-Marie, Vanderdonckt, Jean, Hewitt, W. T., editor, Hansmann, W., editor, and Paternó, Fabio, editor
- Published
- 1995
- Full Text
- View/download PDF
190. Architecture elements for highly-interactive business-oriented applications
- Author
-
Bodart, François, Hennebert, Anne-Marie, Leheureux, Jean-Marie, Sacré, Isabelle, Vanderdonckt, Jean, Goos, G., editor, Hartmanis, J., editor, Bass, Leonard J., editor, Gornostaev, Juri, editor, and Unger, Claus, editor
- Published
- 1993
- Full Text
- View/download PDF
191. Toward Usable Intelligent User Interface
- Author
-
Mezhoudi, Nesrine, primary, Khaddam, Iyad, additional, and Vanderdonckt, Jean, additional
- Published
- 2015
- Full Text
- View/download PDF
192. Generating systems from multiple sketched models
- Author
-
Schmieder, Paul, Plimmer, Beryl, and Vanderdonckt, Jean
- Published
- 2010
- Full Text
- View/download PDF
193. Taking That Perfect Aerial Photo: A Synopsis of Interactions for Drone-based Aerial Photography and Video
- Author
-
Siean, Alexandru-Ionut, Vatavu, Radu-Daniel, Vanderdonckt, Jean, ACM International Conference on Interactive Media Experiences (IMX '21), UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, and UCL - SST/ICTM/INGI - Pôle en ingénierie informatique
- Subjects
Aerial photos, Aerial photography, Intersection, Computer science, Computer graphics (images), User interface design, Avionics, Photography, Applied computing, Computing methodologies: Image processing and computer vision, Scientific literature, Drone
- Abstract
Personal drones are more and more present in our lives and acting as “flying cameras” is one of their most prominent applications. In this work, we conduct a synopsis of the scientific literature on human-drone interaction to identify system functions and corresponding commands for controlling drone-based aerial photography and video, from which we compile a dictionary of interactions. We also discuss opportunities for more research at the intersection of drone computing, augmented vision, and personal photography.
- Published
- 2021
194. Crosside: A Cross-Surface Collaboration by Sketching Design Space
- Author
-
Perez Medina, Jorge Luis, Vanderdonckt, Jean, Villarreal Narvaez, Santiago, 25th International Conference on Visualization and Visual Languages, UCL - SSH/LouRIM - Louvain Research Institute in Management and Organizations, and UCL - SST/ICTM/INGI - Pôle en ingénierie informatique
- Subjects
Surface (mathematics), Design by sketching, user interface, Cross-surface collaboration, Engineering drawing, collaborative design, Computer science, Computer-Supported Collaborative Work, Design space, Human-computer interaction
- Abstract
This paper introduces, motivates, defines, and exemplifies CROSSIDE, a design space for representing capabilities of a software for collaborative sketching in a cross-surface setting, i.e., when stakeholders are interacting with and across multiple interaction surfaces, ranging from low end devices such as smartwatches, mobile phones to high-end devices like wall displays. By determining the greatest common denominator in terms of system properties between forty-one references, the design space is structured according to seven dimensions: user configurations, surface configurations, input interaction techniques, work methods, tangibility, and device configurations. This design space is aimed at satisfying three virtues: descriptive (i.e., the ability to systematically describe any particular work in cross-surface interaction by sketching), comparative (i.e., the ability to consistently compare two or more works belonging to this area), and generative (i.e., the ability to generate new ideas by identifying potentially interesting, undercovered areas). A radar diagram graphically depicts the design space for these three virtues.
- Published
- 2019
195. A methodology for designing information security feedback based on User Interface Patterns
- Author
-
Muñoz-Arteaga, Jaime, González, Ricardo Mendoza, Martin, Miguel Vargas, Vanderdonckt, Jean, and Álvarez-Rodríguez, Francisco
- Published
- 2009
- Full Text
- View/download PDF
196. Two-dimensional Stroke Gesture Recognition: A Survey.
- Author
-
MAGROFUOCO, NATHAN, ROSELLI, PAOLO, and VANDERDONCKT, JEAN
- Subjects
GESTURE, FEATURE extraction, ALGORITHMS, SMARTWATCHES, USER interfaces
- Abstract
The expansion of touch-sensitive technologies, ranging from smartwatches to wall screens, triggered a wider use of gesture-based user interfaces and encouraged researchers to invent recognizers that are fast and accurate for end-users while being simple enough for practitioners. Since the pioneering work on two-dimensional (2D) stroke gesture recognition based on feature extraction and classification, numerous approaches and techniques have been introduced to classify uni- and multi-stroke gestures, satisfying various properties of articulation-, rotation-, scale-, and translation-invariance. As the domain abounds in different recognizers, it becomes difficult for the practitioner to choose the right recognizer, depending on the application and for the researcher to understand the state-of-the-art. To address these needs, a targeted literature review identified 16 significant 2D stroke gesture recognizers that were submitted to a descriptive analysis discussing their algorithm, performance, and properties, and a comparative analysis discussing their similarities and differences. Finally, some opportunities for expanding 2D stroke gesture recognition are drawn from these analyses.
- Published
- 2022
- Full Text
- View/download PDF
197. Advance human–machine interface automatic evaluation
- Author
-
González Calleros, Juan Manuel, Guerrero García, Josefina, and Vanderdonckt, Jean
- Published
- 2013
- Full Text
- View/download PDF
198. Two-dimensional Stroke Gesture Recognition
- Author
-
Magrofuoco, Nathan, primary, Roselli, Paolo, additional, and Vanderdonckt, Jean, additional
- Published
- 2021
- Full Text
- View/download PDF
199. Erratum to: Human Error, Safety and Systems Development
- Author
-
Palanque, Philippe, Vanderdonckt, Jean, Winckler, Marco, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Palanque, Philippe, editor, Vanderdonckt, Jean, editor, and Winckler, Marco, editor
- Published
- 2010
- Full Text
- View/download PDF
200. Context-Aware Adaptation of Service Front-Ends
- Author
-
Caminero Gil, Francisco Javier, Paternò, Fabio, Vanderdonckt, Jean, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Paternò, Fabio, editor, de Ruyter, Boris, editor, Markopoulos, Panos, editor, Santoro, Carmen, editor, van Loenen, Evert, editor, and Luyten, Kris, editor
- Published
- 2012
- Full Text
- View/download PDF