107 results for "Kris Luyten"
Search Results
2. Substitute Buttons: Exploring Tactile Perception of Physical Buttons for Use as Haptic Proxies
- Author
-
Bram van Deurzen, Gustavo Alberto Rovelo Ruiz, Daniël M. Bot, Davy Vanacken, and Kris Luyten
- Subjects
haptic feedback, button interaction, virtual reality, encountered-type haptics, Technology, Science
- Abstract
Buttons are everywhere and are one of the most common interaction elements in both physical and digital interfaces. While virtual buttons offer versatility, enhancing them with realistic haptic feedback is challenging. Achieving this requires a comprehensive understanding of the tactile perception of physical buttons and of how well that perception transfers to virtual counterparts. This research investigates tactile perception of button attributes such as shape, size, and roundness, and its potential generalization across diverse button types. In our study, participants interacted with each of the 36 buttons in our search space and indicated which one they thought they were touching. The findings were used to establish six substitute buttons capable of effectively emulating tactile experiences across various buttons. In a second study, these substitute buttons were validated against virtual buttons in VR, highlighting their potential use as haptic proxies for applications such as encountered-type haptics.
- Published
- 2024
- Full Text
- View/download PDF
3. Improving the Translation Environment for Professional Translators
- Author
-
Vincent Vandeghinste, Tom Vanallemeersch, Liesbeth Augustinus, Bram Bulté, Frank Van Eynde, Joris Pelemans, Lyan Verwimp, Patrick Wambacq, Geert Heyman, Marie-Francine Moens, Iulianna van der Lek-Ciudin, Frieda Steurs, Ayla Rigouts Terryn, Els Lefever, Arda Tezcan, Lieve Macken, Véronique Hoste, Joke Daems, Joost Buysschaert, Sven Coppers, Jan Van den Bergh, and Kris Luyten
- Subjects
computer-aided translation, machine translation, speech translation, translation memory-machine translation integration, user interface, domain-adaptation, human-computer interface, Information technology (T58.5-58.64)
- Abstract
When using computer-aided translation systems in a typical, professional translation workflow, there are several stages at which there is room for improvement. The SCATE (Smart Computer-Aided Translation Environment) project investigated several of these aspects, both from a human-computer interaction point of view, as well as from a purely technological side. This paper describes the SCATE research with respect to improved fuzzy matching, parallel treebanks, the integration of translation memories with machine translation, quality estimation, terminology extraction from comparable texts, the use of speech recognition in the translation process, and human computer interaction and interface design for the professional translation environment. For each of these topics, we describe the experiments we performed and the conclusions drawn, providing an overview of the highlights of the entire SCATE project.
- Published
- 2019
- Full Text
- View/download PDF
4. Semi-automatic extraction of digital work instructions from CAD models
- Author
-
Dorothy Gors, Maarten Witters, Jeroen Put, Kris Luyten, and Bram Vanherle
- Subjects
Engineering drawing, Sequence, Heuristic, Computer science, Visibility (geometry), Process (computing), CAD, Task (computing), Component (UML), Weaving, General Earth and Planetary Sciences, General Environmental Science
- Abstract
Currently, process engineers use documents or authoring tools to bring assembly instructions to the work floor. This is a time-consuming task, as instructions need to be created for each assembly operation. Furthermore, the engineer needs to be familiar with the assembly sequence. To assist the engineer, a tool is developed that i) uses a heuristic based on visibility, part similarity and proximity to semi-automatically determine the assembly sequence from a CAD model and ii) generates, according to the computed sequence, digital work instructions including visualizations and animations extracted from the CAD model. In essence, the assembly sequence generation works in reverse: it determines the order in which components can be removed from the assembly by evaluating whether the visibility of a component is obstructed by the remaining assembly, and the reversed order is then returned as the assembly sequence (a sketch of this heuristic follows this entry). During this process the engineer can modify the proposed sequence, add annotations and alter the visualizations of the proposed instructions, i.e., images or 3D animations. We illustrate, through industrial validation cases such as the assembly of a weaving machine, that the developed tool effectively supports process engineers and speeds up the creation of digital work instructions. This research is supported by VLAIO (Flanders Innovation & Entrepreneurship) and Flanders Make, the strategic center for the manufacturing industry in Flanders, within the framework of the OperatorKnowledge ICON project (HBC.2017.0395). We would like to thank Jens Brulmans, Niels Billen, Robbe Cools and Georges Verpoorten for their contributions to the implementation, and our partners Arkite, Augnition, Barco, CNHi, Atlas Copco, ASCO and Picanol for providing test data and context for this work.
- Published
- 2021
- Full Text
- View/download PDF
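The reverse-order heuristic summarized in entry 4 can be illustrated with a small sketch. This is an approximation under assumed names (`Part`, `blocked_by`, `assembly_sequence`); the actual tool derives obstruction from CAD geometry and also weighs part similarity and proximity.

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    # Parts that must be removed before this one becomes unobstructed
    # (a hypothetical stand-in for the geometric visibility test on the CAD model).
    blocked_by: set = field(default_factory=set)

def assembly_sequence(parts):
    """Reverse-disassembly heuristic: repeatedly remove a part that is not
    obstructed by any part still present, then reverse the removal order."""
    remaining = {p.name: p for p in parts}
    removal_order = []
    while remaining:
        removable = [p for p in remaining.values()
                     if not (p.blocked_by & set(remaining))]
        if not removable:
            raise ValueError("no unobstructed part left; the engineer would intervene here")
        part = removable[0]  # the real tool also ranks candidates by similarity and proximity
        removal_order.append(part.name)
        del remaining[part.name]
    return list(reversed(removal_order))  # reversed disassembly order = assembly order

# Example: the cover hides the bracket, the bracket hides the base.
parts = [Part("cover"), Part("bracket", {"cover"}), Part("base", {"bracket"})]
print(assembly_sequence(parts))  # ['base', 'bracket', 'cover']
```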
5. An Interactive Design Space for Wearable Displays
- Author
-
Kris Luyten, Florian Heller, and Kashyap Todi (Hasselt University; Department of Communications and Networking, Aalto University)
- Subjects
Reflection (computer programming), Wearable Computing, Computer science, Interactive design, Wearable computer, Space (commercial competition), Field (computer science), Identification (information), Displays, Human–computer interaction, Key (cryptography), Survey, Design space
- Abstract
The promise of on-body interactions has led to widespread development of wearable displays. They manifest themselves in highly variable shapes and forms, and are realized using technologies with fundamentally different properties. Through an extensive survey of the field of wearable displays, we characterize existing systems based on key qualities of displays and wearables, such as location on the body, intended viewers or audience, and the information density of rendered content. We present the results of this analysis in an open, web-based interactive design space that supports exploration and refinement along various parameters. The design space, which currently encapsulates 129 cases of wearable displays, aims to inform researchers and practitioners on existing solutions and designs, and enable the identification of gaps and opportunities for novel research and applications. Further, it seeks to provide them with a thinking tool to deliberate on how displayed content should be adapted based on key design parameters. Through this work, we aim to facilitate progress in wearable displays, informed by existing solutions, by providing researchers with an interactive platform for discovery and reflection. This work was partly funded by the Department of Communications and Networking (Comnet, Aalto University) and the Academy of Finland projects 'Human Automata' and 'BAD', and partially funded by Flanders Make, the strategic research centre for the manufacturing industry, through its projects 'SmartHandler' and 'Ergo-EyeHand'.
- Published
- 2021
- Full Text
- View/download PDF
6. Individualising Graphical Layouts with Predictive Visual Search Models
- Author
-
Kashyap Todi, Jussi P. P. Jokinen, Antti Oulasvirta, and Kris Luyten (Department of Communications and Networking, Aalto University; Flanders Make)
- Subjects
Visual search, Cognitive model, Computational design, Graphical layouts, Information retrieval, Computer science, Interface (computing), Human-Computer Interaction, Serial position effect, Adaptive user interfaces, Template, Artificial Intelligence, Human visual system model, Web page, Web navigation
- Abstract
In domains where users are exposed to large variations in visuo-spatial features among designs, they often spend excess time searching for common elements (features) on an interface. This article contributes individualised predictive models of visual search, and a computational approach to restructure graphical layouts for an individual user such that features on a new, unvisited interface can be found quicker. It explores four technical principles inspired by the human visual system (HVS) to predict expected positions of features and create individualised layout templates: (I) the interface with the highest frequency is chosen as the template; (II) the interface with the highest predicted recall probability (serial position curve) is chosen as the template; (III) the most probable locations for features across interfaces are chosen (visual statistical learning) to generate the template; (IV) based on a generative cognitive model, the most likely visual search locations for features are chosen (visual sampling modelling) to generate the template. Given a history of previously seen interfaces, we restructure the spatial layout of a new (unseen) interface with the goal of making its features more easily findable. The four HVS principles are implemented in Familiariser, a web browser that automatically restructures webpage layouts based on the visual history of the user (a sketch of principle III follows this entry). Evaluation of Familiariser (using visual statistical learning) with users provides first evidence that our approach reduces visual search time by over 10%, and the number of eye-gaze fixations by over 20%, during web browsing tasks. The project has partially received funding from the Academy of Finland project COMPUTED and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 637991).
- Published
- 2019
- Full Text
- View/download PDF
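Principle III in entry 6 (visual statistical learning) amounts to assigning each feature its most frequent location across the interfaces a user has previously seen. The sketch below is a minimal illustration with assumed names (`most_probable_template`, grid-cell coordinates); it is not the Familiariser implementation.

```python
from collections import Counter, defaultdict

def most_probable_template(history):
    """history: list of layouts, each a dict mapping a feature name
    (e.g. 'search box') to the grid cell it occupied on that interface.
    Returns a template assigning each feature its most frequent cell."""
    seen = defaultdict(Counter)
    for layout in history:
        for feature, cell in layout.items():
            seen[feature][cell] += 1
    return {feature: cells.most_common(1)[0][0] for feature, cells in seen.items()}

history = [
    {"logo": (0, 0), "search": (0, 2), "login": (0, 3)},
    {"logo": (0, 0), "search": (1, 2), "login": (0, 3)},
    {"logo": (0, 1), "search": (0, 2), "login": (0, 3)},
]
# A new page's features would then be moved toward these expected cells.
print(most_probable_template(history))
# {'logo': (0, 0), 'search': (0, 2), 'login': (0, 3)}
```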
7. Enhancing Patient Motivation through Intelligibility in Cardiac Tele-rehabilitation
- Author
-
Kris Luyten, Dominique Hansen, Karin Coninx, Paul Dendale, and Supraja Sankaran
- Subjects
Intelligibility (communication), Human-Computer Interaction, Clinical study, Tele-rehabilitation, Physical therapy, Patient motivation, Software
- Abstract
Physical exercise training and medication compliance are primary components of cardiac rehabilitation. When rehabilitating independently at home, patients often fail to comply with their prescribed medication and find it challenging to interpret exercise targets or be aware of the expected efforts. Our work aims to assist cardiac patients in understanding their condition better, promoting medication adherence and motivating them to achieve their exercise targets in a tele-rehabilitation setting. We introduce a patient-centric intelligible visualization approach to present prescribed medication and exercise targets to patients. We assessed the efficacy of intelligible visualizations on patients' comprehension in two lab studies, and evaluated the impact on patient motivation and health outcomes in field studies. Patients were able to adhere to medication prescriptions, manage their physical exercises, monitor their progress, and gained better self-awareness of how they achieved their rehabilitation targets. Patients confirmed that the intelligible visualizations motivated them to achieve their targets better. We observed an improvement in overall physical activity levels and health outcomes of patients. Research highlights: (1) presents challenges currently faced in cardiac tele-rehabilitation; (2) demonstrates how intelligibility was applied to two core aspects of cardiac rehabilitation, promoting medication adherence and physical exercise training; (3) reports lab, field and clinical studies demonstrating the efficacy of intelligible visualization, its impact on patient motivation and the resulting health outcomes; (4) reflects on how similar HCI approaches could be leveraged for technology-supported management of critical health conditions such as cardiac diseases.
- Published
- 2019
- Full Text
- View/download PDF
8. Model-based Engineering of Feedforward Usability Function for GUI Widgets
- Author
-
Davy Vanacken, David Navarre, Sven Coppers, Philippe Palanque, and Kris Luyten (Interactive Critical Systems (IRIT-ICS), Institut de recherche en informatique de Toulouse (IRIT))
- Subjects
Computer science, Mistake, Presentation, Human–computer interaction, Function (engineering), Mechanism (biology), Feed forward, Usability, Petri net, Action (philosophy), Software, [INFO.INFO-SE] Computer Science [cs]/Software Engineering [cs.SE], [INFO.INFO-HC] Computer Science [cs]/Human-Computer Interaction [cs.HC], [INFO.INFO-MO] Computer Science [cs]/Modeling and Simulation, [INFO.INFO-PF] Computer Science [cs]/Performance [cs.PF]
- Abstract
Feedback and feedforward are two fundamental mechanisms that support users' activities while interacting with computing devices. While feedback can easily be provided by presenting information to users after an action is triggered, feedforward is much more complex, as it must provide information before an action is performed. For interactive applications where making a mistake has more impact than just reduced user comfort, correct feedforward is an essential step toward correctly informed, and thus safe, usage. Our approach, Fortunettes, is a generic mechanism providing a systematic way of designing feedforward that addresses both action and presentation problems. Including a feedforward mechanism significantly increases the complexity of the interactive application, making it harder for developers to detect and correct defects. We build upon an existing formal notation based on Petri nets for describing the behavior of interactive applications and present an approach that allows for adding correct and consistent feedforward.
- Published
- 2021
- Full Text
- View/download PDF
9. Smart Makerspace: A Web Platform Implementation
- Author
-
Adriano Canabarro Teixeira, Gabriel Paludo Licks, and Kris Luyten
- Subjects
Human–computer interaction, Computer science, Information technology (T58.5-58.64), General Engineering, Workbench, Space (commercial competition), makerspaces, immersive instructional spaces, web platforms, Education
- Abstract
Makerspaces are creative and learning environments, home to activities such as fabrication processes and Do-It-Yourself (DIY) tasks. However, because they contain equipment that is not commonly seen or handled, these spaces can appear rather challenging to novice users. This paper builds on the Smart Makerspace research from Autodesk, which uses a smart workbench as an immersive instructional space for DIY tasks. With its functionalities in mind, and trying to overcome some of its limitations, we approach the concept by building an immersive instructional space as a web platform. The platform was introduced to users in a makerspace, and their feedback reflects its potential for supporting and encouraging novice and intermediate users.
- Published
- 2018
10. The path-to-purchase is paved with digital opportunities: An inventory of shopper-oriented retail technologies
- Author
-
Kim Willems, Annelien Smolders, Johannes Schöning, Kris Luyten, and Malaika Brengman (Faculty of Economic and Social Sciences and Solvay Business School)
- Subjects
Value (ethics), retail technology, Digital era, Shopping value, path-to-purchase, smart retailing, shopper marketing, Cost savings, Categorization, Information and Communications Technology, Management of Technology and Innovation, Business, Business and International Management, Marketing, Servicescape, Applied Psychology, PATH (variable)
- Abstract
This study focuses on innovative ways to digitally instrument the servicescape in bricks-and-mortar retailing. In the present digital era, technological developments allow for augmenting the shopping experience and capturing moments-of-truth along the shopper's path-to-purchase. This article provides an encompassing inventory of retail technologies resulting from a systematic screening of three secondary data sources, over 2008–2016: (1) the academic marketing literature, (2) retailing-related scientific ICT publications, and (3) business practices (e.g., publications from retail labs and R&D departments). An affinity diagram approach allows for clustering the retail technologies from an HCI perspective. Additionally, the technologies are categorized in terms of the type of shopping value they offer and the stage in the path-to-purchase in which they prevail. This in-depth analysis results in a comprehensive inventory of retail technologies that allows for verifying the suitability of these technologies for targeted in-store shopper marketing objectives (cf. the resulting online faceted-search repository at www.retail-tech.org). The findings indicate that the majority of the inventoried technologies provide cost savings, convenience and utilitarian value, whereas few offer hedonic or symbolic benefits. Moreover, at present the earlier stages of the path-to-purchase appear to be the most instrumented. The article concludes with a research agenda. This research has been funded by the Digitopia and Flanders Innovation & Entrepreneurship Baekeland grant number 150726 ('In search of a sustainable competitive advantage: Digitally instrumenting bricks-and-mortar retailing in Flanders'; April 2016–March 2020). The authors furthermore acknowledge Randy Lauriers for her input in the data collection process for this study and the two anonymous reviewers whose suggestions helped in further strengthening a previous version of this manuscript.
- Published
- 2017
- Full Text
- View/download PDF
11. FORTNIoT: Intelligible Predictions to Improve User Understanding of Smart Home Behavior
- Author
-
Kris Luyten, Davy Vanacken, and Sven Coppers
- Subjects
User interface toolkits, Intelligibility, Scrutability, Internet-of-Things, Smart Homes, Simulations, Predictions, Computer Networks and Communications, Computer science, Human-centered computing → Graphical user interfaces (CCS), Home automation, Data science, Human-Computer Interaction, Debugging, Hardware and Architecture, HCI theory, concepts and models, Interaction design theory, concepts and paradigms, Intelligibility (philosophy), Internet of Things, Interaction paradigms
- Abstract
[Figure 1: Based on self-sustaining predictions (e.g., the sun will set), FORTNIoT can deduce when trigger-condition-action rules (e.g., IF sunset AND anyone home THEN lower the rolling shutter) will trigger in the near future and what effects they will cause (e.g., the rolling shutter will lower).] Ubiquitous environments, such as smart homes, are becoming more intelligent and autonomous. As a result, their behavior becomes harder to grasp and unintended behavior becomes more likely. Researchers have contributed tools to better understand and validate an environment's past behavior (e.g., logs, end-user debugging) and to prevent unintended behavior. There is, however, a lack of tools that help users understand the future behavior of such an environment: information about the actions it will perform, and why it will perform them, remains concealed. In this paper, we contribute FORTNIoT, a well-defined approach that combines self-sustaining predictions (e.g., weather forecasts) and simulations of trigger-condition-action rules to deduce when these rules will trigger in the future and what state changes they will cause to connected smart home entities (a sketch of this simulation loop follows this entry). We implemented a proof-of-concept of this approach, as well as a visual demonstrator that shows such predictions, including causes and effects, in an overview of a smart home's behavior. A between-subject evaluation with 42 participants indicates that FORTNIoT predictions lead to a more accurate understanding of the future behavior, more confidence in that understanding, and more appropriate trust in what the system will (not) do. We envision a wide variety of situations where predictions about the future are beneficial to inhabitants of smart homes, such as debugging unintended behavior and managing conflicts by exception, and hope to spark a new generation of intelligible tools for ubiquitous environments.
- Published
- 2020
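The mechanism described in entry 11, simulating trigger-condition-action rules against a timeline of self-sustaining predictions, can be sketched as follows. The `Rule` and `predict` names and the event encoding are assumptions for illustration, not the actual FORTNIoT API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    trigger: str                        # event name, e.g. "sunset"
    condition: Callable[[dict], bool]   # evaluated against the simulated home state
    action: Callable[[dict], dict]      # returns the state changes it would cause

def predict(timeline, rules, state):
    """timeline: (time, event, state_changes) tuples from self-sustaining sources
    such as an astronomy or weather forecast. Simulates the rules forward in time
    and returns the predicted rule firings together with their effects."""
    predictions = []
    for time, event, changes in sorted(timeline, key=lambda entry: entry[0]):
        state = {**state, **changes}                      # apply the external prediction
        for rule in rules:
            if rule.trigger == event and rule.condition(state):
                effects = rule.action(state)
                state = {**state, **effects}              # effects may enable later rules
                predictions.append((time, event, effects))
    return predictions

# IF sunset AND anyone home THEN lower the rolling shutter.
rules = [Rule("sunset", lambda s: s["anyone_home"], lambda s: {"shutter": "down"})]
timeline = [("21:30", "sunset", {"sun": "down"})]
print(predict(timeline, rules, {"anyone_home": True, "shutter": "up"}))
# [('21:30', 'sunset', {'shutter': 'down'})]
```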
12. Improving the Translation Environment for Professional Translators
- Author
-
Jan Van den Bergh, Joost Buysschaert, Frieda Steurs, Kris Luyten, Vincent Vandeghinste, Joris Pelemans, Lyan Verwimp, Liesbeth Augustinus, Tom Vanallemeersch, Geert Heyman, Frank Van Eynde, Lieve Macken, Sven Coppers, Iulianna van der Lek-Ciudin, Bram Bulté, Patrick Wambacq, Marie-Francine Moens, Els Lefever, Joke Daems, Veronique Hoste, Arda Tezcan, and Ayla Rigouts Terryn
- Subjects
Technology, Machine translation, Computer Networks and Communications, Process (engineering), Computer science, speech translation, domain-adaptation, Languages and Literatures, Human–computer interaction, Translation technology, Science & Technology, Information technology (T58.5-58.64), Terminology extraction, Communication, linguistics, Approximate string matching, translation memory-machine translation integration, computer-aided translation, user interface, human-computer interface, Human-Computer Interaction, Workflow, Computer Science (Interdisciplinary Applications)
- Abstract
When using computer-aided translation systems in a typical, professional translation workflow, there are several stages at which there is room for improvement. The SCATE (Smart Computer-Aided Translation Environment) project investigated several of these aspects, both from a human-computer interaction point of view, as well as from a purely technological side. This paper describes the SCATE research with respect to improved fuzzy matching, parallel treebanks, the integration of translation memories with machine translation, quality estimation, terminology extraction from comparable texts, the use of speech recognition in the translation process, and human computer interaction and interface design for the professional translation environment. For each of these topics, we describe the experiments we performed and the conclusions drawn, providing an overview of the highlights of the entire SCATE project. The research in this project was funded by the Flemish Agency for Innovation and Technology IWT, project number 13007.
- Published
- 2019
13. Fortunettes
- Author
-
Kris Luyten, Davy Vanacken, Philippe Palanque, David Navarre, Christine Gris, and Sven Coppers (Expertise centre for Digital Media (EDM), Hasselt University; Interactive Critical Systems (IRIT-ICS), Institut de recherche en informatique de Toulouse (IRIT); Airbus, France)
- Subjects
Computer Networks and Communications, Computer science, Feedforward, Artificial intelligence, Intelligibility, Human–computer interaction, Human-machine interface, Weather radar, User interface widgets, Social Sciences (miscellaneous), [INFO.INFO-AI] Computer Science [cs]/Artificial Intelligence [cs.AI], [INFO.INFO-HC] Computer Science [cs]/Human-Computer Interaction [cs.HC]
- Abstract
Feedback is commonly used to explain what happened in an interface. What-if questions, on the other hand, remain mostly unanswered. In this paper, we present the concept of enhanced widgets capable of visualizing their future state, which helps users to understand what will happen without committing to an action (a sketch of this idea follows this entry). We describe two approaches to extend GUI toolkits to support widget-level feedforward, and illustrate the usefulness of widget-level feedforward in a standardized interface to control the weather radar in commercial aircraft. In our evaluation, we found that users require fewer clicks to achieve tasks and are more confident about their actions when feedforward information is available. These findings suggest that widget-level feedforward is highly suitable in applications the user is unfamiliar with, or when high confidence is desirable.
- Published
- 2019
- Full Text
- View/download PDF
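The widget-level feedforward in entry 13, showing what a widget's state would become before the user commits, can be mimicked by evaluating the action on a copy of the application state. The `FeedforwardButton` class below is a hypothetical sketch, not the Fortunettes GUI-toolkit extension.

```python
import copy

class FeedforwardButton:
    """A button that, on hover, previews the state the interface would be in
    after clicking, without committing to the action."""

    def __init__(self, label, action, app_state):
        self.label = label
        self.action = action          # function: state -> new state
        self.app_state = app_state

    def on_hover(self):
        future = self.action(copy.deepcopy(self.app_state))   # simulate, don't commit
        return {"widget": self.label,
                "future_enabled": future.get("enabled_widgets", []),
                "future_values": future.get("values", {})}

    def on_click(self):
        self.app_state = self.action(self.app_state)           # now actually commit

def turn_on_weather_radar(state):
    state["values"]["radar"] = "on"
    state["enabled_widgets"].append("tilt_selector")
    return state

state = {"values": {"radar": "off"}, "enabled_widgets": []}
button = FeedforwardButton("Radar ON", turn_on_weather_radar, state)
print(button.on_hover())   # preview: the tilt selector would become enabled
button.on_click()          # apply the action for real
```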
14. JigFab
- Author
-
Raf Ramakers, Danny Leen, Tom Veuskens, and Kris Luyten
- Subjects
Engineering drawing, Fabrication, Computer science, Mortise and tenon, Woodworking, Domain knowledge, Power (physics)
- Abstract
We present JigFab, an integrated end-to-end system that supports casual makers in designing and fabricating constructions with power tools. Starting from a digital version of the construction, JigFab achieves this by generating various types of constraints that configure and physically aid the movement of a power tool. Constraints are generated for every operation and are custom to the work piece. Constraints are laser cut and assembled together with predefined parts to reduce waste. JigFab's constraints are used according to an interactive step-by-step manual. JigFab internalizes all the required domain knowledge for designing and building intricate structures, consisting of various types of finger joints, tenon & mortise joints, grooves, and dowels. Building such structures is normally reserved for artisans or automated with advanced CNC machinery.
- Published
- 2019
- Full Text
- View/download PDF
15. Towards Tool-Support for Robot-Assisted Product Creation in Fab Labs
- Author
-
Tom Veuskens, Kris Luyten, Jan Van den Bergh, Bram van Deurzen, and Raf Ramakers (Expertise centre for Digital Media (EDM), Hasselt University)
- Subjects
Production line, Product creation, Human-robot collaboration, Toolkit, Computer science, Variation (game tree), End-user development, Variety (cybernetics), Low volume, Human–computer interaction, Production (economics), Robot, [INFO] Computer Science [cs]
- Abstract
Collaborative robot-assisted production has great potential for high-variety, low-volume production lines. These types of production lines are common in personal fabrication settings as well as in several types of flexible production lines. Moreover, many assembly tasks are hard to complete by a single user or a single robot, and benefit greatly from a fluent collaboration between both. However, programming such systems is cumbersome, given the wide variation of tasks and the complexity of instructing a robot how it should move and operate in collaboration with a human user. In this paper we explore the case of collaborative assembly for personal fabrication. Based on a CAD model of the envisioned product, our software analyzes how it can be composed from a set of standardized pieces and suggests a series of collaborative assembly steps to complete the product. The proposed tool removes the need for the end-user to perform additional programming of the robot. We use a low-cost robot setup that is accessible and usable for typical personal fabrication activities in Fab Labs and Makerspaces. Participants in a first experimental study testified that our approach leads to a fluent collaborative assembly process. Based on this preliminary evaluation, we present next steps and potential implications.
- Published
- 2018
- Full Text
- View/download PDF
16. Intellingo
- Author
-
Kris Luyten, Vincent Vandeghinste, Karin Coninx, Iulianna van der Lek-Ciudin, Jan Van den Bergh, Sven Coppers, and Tom Vanallemeersch
- Subjects
User experience design, H.5.2 User Interfaces: Graphical user interfaces (GUI), J.5 Arts and Humanities: Linguistics, Computer science, Human–computer interaction, Usability, Intelligibility (communication), User interface
- Abstract
Translation environments offer various translation aids to support professional translators. However, translation aids typically provide only limited justification for the translation suggestions they propose. In this paper we present Intellingo, a translation environment that explores intelligibility for translation aids, to enable more sensible usage of translation suggestions. We performed a comparative study between an intelligible version and a non-intelligible version of Intellingo. The results show that although adding intelligibility does not necessarily result in significant changes to the user experience, translators can better assess translation suggestions without a negative impact on their performance. Intelligibility is preferred by translators when the additional information it conveys benefits the translation process and when this information is not part of the translator’s readily available knowledge. The SCATE (Smart Computer-Aided Translation Environment) project IWT 130041 is funded by the Flemish Institute for the Promotion of the Scientific-Technological Research in the Industry (IWT Vlaanderen).
- Published
- 2018
- Full Text
- View/download PDF
17. SmartObjects
- Author
-
Sebastian Günther, Kris Luyten, Oliver Brdiczka, Markus Funk, Dirk Schnelle-Walka, Max Mühlhäuser, Niloofar Dezfuli, Florian Müller, and Tobias Grosse-Puppendahl
- Subjects
HCI, Smart objects, Focus (computing), Computer science, Perspective (graphical), multimodal and adaptive interaction, context-awareness, tangible interaction, Embedded intelligence, novel interaction, User experience design, Human–computer interaction, Situated, enabling technologies, H.5.m [User Interfaces: Miscellaneous], Natural (music), embodied interaction
- Abstract
The emergence of smart objects has the potential to radically change the way we interact with technology. Through embedded means for input and output, such objects allow for more natural and immediate interaction. The SmartObjects workshop will focus on how such embedded intelligence in objects situated in the user's physical environment can be used to provide more efficient and enjoyable interactions. We discuss the design from the technology and the user experience perspective.
- Published
- 2018
- Full Text
- View/download PDF
18. Hasselt
- Author
-
Kris Luyten, Jan Van den Bergh, Karin Coninx, and Fredy Cuenca
- Subjects
Event-driven programming, Source code, Event (computing), Computer science, Programming language, Software development, Multimodal interaction, Callback, Declarative programming, Abstraction (linguistics)
- Abstract
Implementing multimodal interactions with event-driven languages results in a 'callback soup': source code littered with a multitude of flags that have to be maintained in a self-consistent manner across different event handlers. Prototyping multimodal interactions adds to the complexity and error sensitivity, since the program code has to be refined iteratively as developers explore different possibilities and solutions. The authors present a declarative language for rapid prototyping of multimodal interactions: Hasselt permits declaring composite events, sets of events that are logically related because of the interaction they support, which can easily be bound to dedicated event handlers for separate interactions (a sketch of this idea follows this entry). The authors' approach allows multimodal interactions to be described at a higher level of abstraction than event languages, which saves developers from dealing with the typical 'callback soup', resulting in a gain in programming efficiency and a reduction in errors when writing event-handling code. They compared Hasselt with a traditional programming language with strong support for events in a study with 12 participants, each with a solid background in software development. When performing equivalent modifications to a multimodal interaction, the use of Hasselt leads to higher completion rates, lower completion times, and less code testing than a mainstream event-driven language.
- Published
- 2016
- Full Text
- View/download PDF
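Entry 18's central idea, declaring one composite event and binding it to a single handler instead of spreading flags over many callbacks, can be approximated in plain code. The `CompositeEvent` helper below is an illustrative stand-in, not the Hasselt language itself.

```python
class CompositeEvent:
    """Fires a handler once all member events have been seen, in any order,
    replacing the usual per-event flags scattered across callbacks."""

    def __init__(self, *events, handler):
        self.required = set(events)
        self.seen = set()
        self.handler = handler

    def feed(self, event, data=None):
        if event in self.required:
            self.seen.add(event)
        if self.seen == self.required:
            self.handler(data)
            self.seen.clear()          # re-arm for the next occurrence

# "Put that there": a speech command plus a pointing gesture, in any order.
put_that_there = CompositeEvent(
    "speech:put_that_there", "gesture:point",
    handler=lambda data: print("move object to", data),
)
put_that_there.feed("speech:put_that_there")
put_that_there.feed("gesture:point", data=(120, 45))   # handler fires here
```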
19. Have You Met Your METs? – Enhancing Patient Motivation to Achieve Physical Activity Targets in Cardiac Tele-rehabilitation
- Author
-
Kris Luyten, Karin Coninx, Supraja Sankaran, Dominique Hansen, and Paul Dendale
- Subjects
Secondary prevention, Tele-rehabilitation, Physical activity, Physical therapy, patient-centered computing, self-management, patient motivation, intelligibility, self-awareness, Intelligibility (communication)
- Abstract
Physical exercise is a primary component of cardiac rehabilitation. Interpreting exercise targets and being aware of the expected effort while rehabilitating independently at home is challenging for patients. Our work aims to assist cardiac patients in understanding their condition better and motivating them to achieve their exercise targets in a tele-rehabilitation setting. We introduce a patient-centric intelligible visualization approach to present prescribed rehabilitation targets to patients based on Metabolic Equivalent of Tasks (METs). We assessed efficacy of intelligible visualizations on patients’ comprehension in a lab study. We evaluated the impact on patient motivation and health outcomes in field studies. Patients were able to manage their prescribed activities, monitor their progress, and gained understanding on how their physical activities contribute to their rehabilitation targets. Patients confirmed that the intelligible visualizations motivated them to achieve their targets better. We observed an improvement in overall physical activity levels and health outcomes of patients.
- Published
- 2018
20. Familiarisation: Restructuring Layouts with Visual Learning Models
- Author
-
Kris Luyten, Jussi P. P. Jokinen, Antti Oulasvirta, and Kashyap Todi
- Subjects
Visual search, Computational design, Graphical layouts, Restructuring, Computer science, Interface (computing), Human–computer interaction, Adaptive user interfaces, Human visual system model, Visual learning
- Abstract
In domains where users are exposed to large variations in visuo-spatial features among designs, they often spend excess time searching for common elements (features) in familiar locations. This paper contributes computational approaches to restructuring layouts such that features on a new, unvisited interface can be found quicker. We explore four concepts of familiarisation, inspired by the human visual system (HVS), to automatically generate a familiar design for each user. Given a history of previously visited interfaces, we restructure the spatial layout of the new (unseen) interface with the goal of making its elements more easily found. Familiariser is a browser-based implementation that automatically restructures webpage layouts based on the visual history of the user. Our evaluation with users provides first evidence favouring familiarisation. The project has partially received funding from the Academy of Finland project COMPUTED and the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement 637991).
- Published
- 2018
21. Capturing Design Decision Rationale with Decision Cards
- Author
-
Karin Coninx, Mieke Haesen, Gustavo Rovelo, Marisela Gutierrez Lopez, and Kris Luyten (Hasselt University)
- Subjects
Decision support system, Traceability, Decision engineering, Management science, Computer science, Design rationale documentation, R-CAST, Business decision mapping, Design process, Software engineering, Engineering design process, Decision analysis, Decision-making, [INFO] Computer Science [cs]
- Abstract
In the design process, designers make a wide variety of decisions that are essential to transform a design from a conceptual idea into a concrete solution. Recording and tracking design decisions, a first step to capturing the rationale of the design process, are tasks that until now have been considered cumbersome and too constraining. We used a holistic approach to design, deploy, and verify decision cards: a low-threshold tool to capture, externalize, and contextualize design decisions during early stages of the design process. We evaluated the usefulness and validity of decision cards with both novice and expert designers. Our exploration results in valuable insights into how such decision cards are used and into the type of information that practitioners document as design decisions, and highlights the properties that make a recorded decision useful for supporting awareness and traceability in the design process.
- Published
- 2017
- Full Text
- View/download PDF
22. Introduction to the special issue on interaction with smart objects
- Author
-
Kris Luyten, Daniel Schreiber, Max Mühlhäuser, Oliver Brdiczka, and Melanie Hartman
- Subjects
Multimedia, Smart objects, Computer science, Usable, Object (computer science), Human-Computer Interaction, Artificial Intelligence, Human–computer interaction, Information and Communications Technology, Key (cryptography), Smart environment, Projection (set theory)
- Abstract
Smart objects can be smart because of the information and communication technology that is added to human-made artifacts. It is not, however, the technology itself that makes them smart but rather the way in which the technology is integrated, and their smartness surfaces through how people are able to interact with these objects. Hence, the key challenge for making smart objects successful is to design usable and useful interactions with them. We list five features that can contribute to the smartness of an object, and we discuss how smart objects can help resolve the simplicity-featurism paradox. We conclude by introducing the three articles in this special issue, which dive into various aspects of smart object interaction: augmenting objects with projection, service-oriented interaction with smart objects via a mobile portal, and an analysis of input-output relations in interaction with tangible smart objects.
- Published
- 2013
- Full Text
- View/download PDF
23. Coaching Compliance: A Tool for Personalized e-Coaching in Cardiac Rehabilitation
- Author
-
Mieke Haesen, Supraja Sankaran, Paul Dendale, Kris Luyten, and Karin Coninx (Hasselt University)
- Subjects
Remote patient monitoring, Patient risk, Education, Coaching, Information recommendation, Personalization, Compliance (psychology), Technology in healthcare, HCI, Remote rehabilitation, Personalized coaching, Medical education, Focus (computing), Rehabilitation, E-coaching, Psychology, [INFO] Computer Science [cs]
- Abstract
Patient coaching is integral to cardiac rehabilitation programs: it helps patients understand and cope better with their condition and become active participants in their care. The introduction of remote patient monitoring technologies and tele-monitoring solutions has proven effective and paved the way for novel remote rehabilitation approaches. Nonetheless, these solutions focus largely on monitoring patients, without a specific focus on coaching them. Additionally, these systems lack personalization and a deeper understanding of individual patient needs. In our demonstration, we present a tool to personalize e-coaching based on the individual risk factors, adherence rates and personal preferences of patients using a tele-rehabilitation solution. We developed the tool after conducting a workshop and multiple brainstorms with various caregivers involved in coaching cardiac patients, to connect their perspectives with patient needs. It was integrated into a comprehensive tele-rehabilitation application.
- Published
- 2017
- Full Text
- View/download PDF
24. Toward specifying Human-Robot Collaboration with composite events
- Author
-
Kris Luyten, Jan Van Den Bergh, Fredy Cuenca Lucero, and Karin Coninx
- Subjects
Robot kinematics, Sequence, Computer science, Robotics, Human–robot interaction, Human–computer interaction, Robot, Artificial intelligence, Abstraction (linguistics)
- Abstract
Human-Robot Collaboration is increasingly considered in manufacturing to better combine the strengths of humans and robots. Establishing this human-robot collaboration may require multi-modal interaction; input to and output from the robot can both use multiple channels in sequence or in parallel. Designing effective interaction requires the expertise from different domains, possibly originating from people with different backgrounds. In our work we explore how composite events — hierarchical composition of events — can be used in a way that eases the communication within a multi-disciplinary team. In this paper, we present how the concept of composite events can be used to create different layers of abstraction that can be used to ease prototyping and discussion of human-robot collaboration with stakeholders through a supporting tool called Hasselt UIMS. At the lower level(s) of abstraction, the composite events can be mapped to the message-based communication as implemented in the Robotic Operating System (ROS), which is used to program collaborative robots, such as the Baxter robot from Rethink Robotics.
- Published
- 2016
- Full Text
- View/download PDF
25. A Unified Approach to Uncertainty-Aware Ubiquitous Localisation of Mobile Users
- Author
-
Karin Coninx, Kris Luyten, and Petr Aksenov
- Subjects
Ubiquitous computing, General Computer Science, Multimedia, Computer science, Location awareness, Ontology (information science), Preference, Presentation, Human–computer interaction, Feature (computer vision), Context awareness
- Abstract
Localisation is a standard feature in many mobile applications today, and there are numerous techniques for determining a user's location both indoors and outdoors. The provided location information is often organised in a format tailored to a particular localisation system's needs and restrictions, making the use of several systems in one application cumbersome. The presented approach models the details of localisation systems and uses this model to create a unified view on localisation, in which special attention is paid to the uncertainty arising from different localisation conditions and to its presentation to the user (a sketch of such a unified record follows this entry). The work discusses technical considerations, challenges and issues of the approach, and reports on a user study on the acceptance of a mobile application's behaviour reflecting the approach. The results of the study show the suitability of the approach and reveal users' preference for automatic and informed changes they experienced while using the application.
- Published
- 2011
- Full Text
- View/download PDF
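The unified, uncertainty-aware view of entry 25 can be pictured as a common record that every localisation system maps its fixes into, so the application can reason about and present uncertainty consistently. The `LocationEstimate` structure and `best_estimate` helper below are assumptions for illustration, not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class LocationEstimate:
    source: str           # e.g. "gps", "wifi", "bluetooth"
    x: float              # position in a shared coordinate frame (metres)
    y: float
    uncertainty_m: float  # radius within which the user is assumed to be

def best_estimate(estimates):
    """Pick the estimate with the smallest uncertainty; the UI can still
    report how uncertain the chosen fix is (e.g. 'within about 8 m')."""
    chosen = min(estimates, key=lambda e: e.uncertainty_m)
    return chosen, f"located by {chosen.source}, within ~{chosen.uncertainty_m:.0f} m"

fixes = [LocationEstimate("gps", 12.0, 40.5, 15.0),
         LocationEstimate("wifi", 11.2, 41.0, 8.0)]
print(best_estimate(fixes)[1])   # located by wifi, within ~8 m
```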
26. Finding a needle in a haystack: an interactive video archive explorer for professional video searchers
- Author
-
Mieke Haesen, Jan Meskens, Kris Luyten, Karin Coninx, Jan Hendrik Becker, Tinne Tuytelaars, Gert-Jan Poulisse, Phi The Pham, and Marie-Francine Moens
- Subjects
Multimedia, Interactive visualisations, Computer Networks and Communications, Computer science, Interactive video, Search engine indexing, Searching and browsing video archives, Usability, Information filtering, Task (project management), Hardware and Architecture, Media Technology, User-centred software engineering, Software
- Abstract
Professional video searchers typically have to search for particular video fragments in a vast video archive that contains many hours of video data. Without the right video archive exploration tools, this is a difficult and time-consuming task that induces hours of video skimming. We propose the video archive explorer, a video exploration tool that provides visual representations of automatically detected concepts to facilitate individual and collaborative video search tasks. This video archive explorer is developed by employing a user-centred methodology, which ensures that the tool is more likely to fit the end users' needs. A qualitative evaluation with professional video searchers shows that the combination of automatic video indexing, interactive visualisations and user-centred design can result in increased usability, user satisfaction and productivity.
- Published
- 2011
- Full Text
- View/download PDF
27. Exploring the Role of Artefacts to Coordinate Design Meetings
- Author
-
Kris Luyten, Karin Coninx, Davy Vanacken, and Marisela Gutierrez Lopez
- Subjects
design process, design artefacts, multidisciplinary teams, case study, Process management, Process (engineering), Computer science, Multidisciplinary approach, Common ground, Relevance (information retrieval), Engineering design process, Cognitive ergonomics, Variety (cybernetics)
- Abstract
Design artefacts are vital to communicate design outcomes, both in remote and co-located settings. However, it is unclear how artefacts are used to mediate interactions between designers and stakeholders of the design process. The purpose of this paper is to explore how professional design teams use artefacts to guide and capture discussions involving multidisciplinary stakeholders while they work in a co-located setting. An earlier draft of this paper was published in the Proceedings of the European Conference on Cognitive Ergonomics (ECCE 2017). This work adds substantial clarification of the methodology followed, further details and photographs of the case studies, and an extended discussion about our findings and their relevance for designing interactive systems. We report the observations of six design meetings in three different projects, involving professional design teams that follow a user-centered design approach. Meetings with stakeholders are instrumental for design projects. However, design teams face the challenge of synthesizing large amounts of information, often in a limited time, and with minimal common ground between meeting attendees. We found that all the observed design meetings had a similar structure consisting of a series of particular phases, in which design activities were organized around artefacts. These artefacts were used as input to disseminate and gather feedback on previous design outcomes, or as output to collect and process a variety of perspectives. We discuss the challenges faced by design teams during design meetings, and propose three design directions for interactive systems to coordinate design meetings revolving around artefacts. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement n° 610725 (COnCEPT project).
- Published
- 2018
- Full Text
- View/download PDF
28. A Grounded Approach for Applying Behavior Change Techniques in Mobile Cardiac Tele-Rehabilitation
- Author
-
Kris Luyten, Paul Dendale, Ines Frederix, Supraja Sankaran, Karin Coninx, and Mieke Haesen
- Subjects
Persuasion, Activities of daily living, Process management, Computer science, Behavior change, Behavior change methods, User experience design, Multidisciplinary approach, Human–computer interaction, Software design pattern, Set (psychology)
- Abstract
In mobile tele-rehabilitation applications for Coronary Artery Disease (CAD) patients, behavior change plays a central role in influencing better therapy adherence and prevention of disease recurrence. However, creating sustainable behavior change that holds a beneficial impact over a prolonged period of time remains an important challenge. In this paper we discuss various models and frameworks related to persuasion and behavior change, and investigate how to incorporate these with a multidisciplinary user-centered design approach for creating a mobile tele-rehabilitation application. By implementing different concepts that contribute to behavior change and applying a set of distinct persuasive design patterns, we were able to translate the high-level goals of behavior theory into a mobile application that explicitly incorporates behavior change techniques and also offers a good overall user experience. We evaluated our system, HeartHab, in a lab setting and show that our approach leads to a high user acceptance and willingness to use the system in daily activities.
- Published
- 2016
29. Gestu-Wan - An Intelligible Mid-Air Gesture Guidance System for Walk-up-and-Use Displays
- Author
-
Gustavo Rovelo, Davy Vanacken, Karin Coninx, Kris Luyten, and Donald Degraen (Hasselt University)
- Subjects
Gesture guide, Human–computer interaction, Computer science, Information Interfaces and Presentation (e.g., HCI), Visibility (geometry), Artificial intelligence, Guidance system, Mid-air gestures, Walk-up-and-use, Gesture, [INFO] Computer Science [cs]
- Abstract
We present Gestu-Wan, an intelligible gesture guidance system designed to support mid-air gesture-based interaction for walk-up-and-use displays. Although gesture-based interfaces have become more prevalent, there is currently very little uniformity with regard to gesture sets and the way gestures can be executed. This leads to confusion, bad user experiences and users who rather avoid than engage in interaction using mid-air gesturing. Our approach improves the visibility of gesture-based interfaces and facilitates execution of mid-air gestures without prior training. We compare Gestu-Wan with a static gesture guide, which shows that it can help users with both performing complex gestures as well as understanding how the gesture recognizer works.
- Published
- 2015
- Full Text
- View/download PDF
30. Proxemic Flow: Dynamic Peripheral Floor Visualizations for Revealing and Mediating Large Surface Interactions
- Author
-
Kris Luyten, Jon Bird, Jo Vermeulen, Nicolai Marquardt, Karin Coninx, Abascal, J, Barbosa, S, Fetter, M, Gross, T, Palanque, P, Winckler, M, Abascal, Julio, Barbosa, Simone, Fetter, Mirko, Gross, Tom, Palanque, Philippe, Winckler, Marco, University of Birmingham [Birmingham], Hasselt University (UHasselt), iMinds, Catholic University of Leuven - Katholieke Universiteit Leuven (KU Leuven), University College of London [London] (UCL), City University London, and TC 13
- Subjects
QA75 ,feedback ,proxemic interactions ,implicit interaction ,discoverability ,intelligibility ,spatial feedback ,Discoverability ,Computer science ,media_common.quotation_subject ,05 social sciences ,Spatial feedback ,Fidelity ,020207 software engineering ,02 engineering and technology ,Public displays ,Implicit interaction ,Feedback ,Proxemics ,Proxemic interactions ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,Information system ,0501 psychology and cognitive sciences ,[INFO]Computer Science [cs] ,Intelligibility ,050107 human factors ,media_common - Abstract
International audience; Interactive large surfaces have recently become commonplace for interactions in public settings. The fact that people can engage with them, and the spectrum of possible interactions, however, often remain invisible and can be confusing or ambiguous to passersby. In this paper, we explore the design of dynamic peripheral floor visualizations for revealing and mediating large surface interactions. Extending earlier work on interactive illuminated floors, we introduce a novel approach for leveraging floor displays in a secondary, assisting role to aid users in interacting with the primary display. We illustrate a series of visualizations with the illuminated floor of the Proxemic Flow system. In particular, we contribute a design space for peripheral floor visualizations that (a) provides peripheral information about tracking fidelity with personal halos, (b) makes interaction zones and borders explicit for easy opt-in and opt-out, and (c) gives cues inviting spatial movement or suggesting possible next interaction steps through wave, trail, and footstep animations. We demonstrate our proposed techniques in the context of a large surface application and discuss important design considerations for assistive floor visualizations.
- Published
- 2015
- Full Text
- View/download PDF
31. SmartObjects
- Author
-
Dirk Schnelle-Walka, Max Mühlhäuser, Stefan Radomski, Oliver Brdiczka, Jochen Huber, Kris Luyten, and Tobias Grosse-Puppendahl
- Abstract
The increasing number of smart objects in our everyday life shapes how we interact beyond the desktop. In this workshop we discussed how the interaction with these smart objects should be designed from various perspectives. This year’s workshop put a special focus on affective computing with smart objects, as reflected by the keynote talk.
- Published
- 2015
- Full Text
- View/download PDF
32. Designing distributed user interfaces for ambient intelligent environments using models and simulations
- Author
-
Chris Vandervelpen, Kris Luyten, Jan Van den Bergh, and Karin Coninx
- Subjects
Interactive systems engineering ,Focus (computing) ,business.industry ,Natural user interface ,Computer science ,User modeling ,General Engineering ,Computer Graphics and Computer-Aided Design ,User interface design ,Human-Computer Interaction ,User experience design ,Human–computer interaction ,Model-driven architecture ,User interface ,business ,computer ,computer.programming_language - Abstract
There is a growing demand for design support to create interactive systems that are deployed in ambient intelligent environments. Unlike traditional interactive systems, the wide diversity of situations in which these types of user interfaces need to work requires tool support that is close to the end-user's environment on the one hand and provides a smooth integration with the application logic on the other hand. This paper shows how the model-based user interface development methodology can be applied to ambient intelligent environments; we propose a task-centered approach towards the design of interactive systems by means of appropriate visualizations and simulations of different models. Besides the use of typical user interface models, such as the task and presentation model, to support interface design, we focus on user interfaces supporting situated task distributions and a visualization of context influences on deployed, possibly distributed, user interfaces. To enable this we introduce an environment model describing the device configuration at a particular moment in time. To support the user interface designer while creating these complex interfaces for ambient intelligent environments, we discuss tool support using a visualization of the environment together with simulations of the user interface configurations. We also show how the concepts presented in the paper can be integrated within model-driven engineering, hereby narrowing the gap between HCI design and software engineering.
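The environment model is described here only at a conceptual level; as a rough, hypothetical sketch (not the paper's actual model), a snapshot of a device configuration at one moment in time could be represented along the following lines in Python, with all class and field names invented for illustration:

    # Hypothetical sketch of an environment-model snapshot; names and fields
    # are illustrative only, not the model used in the paper.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Device:
        name: str                       # e.g. "wall display", "handheld"
        modalities: List[str]           # e.g. ["graphical", "speech"]
        screen_size_mm: Tuple[int, int] # (width, height), (0, 0) if no screen
        location: Tuple[float, float]   # coarse (x, y) position in the room

    @dataclass
    class EnvironmentSnapshot:
        timestamp: float                # moment in time this configuration holds
        devices: List[Device] = field(default_factory=list)

        def candidates_for(self, modality: str) -> List[Device]:
            """Devices that could render a task requiring the given modality."""
            return [d for d in self.devices if modality in d.modalities]

    snapshot = EnvironmentSnapshot(
        timestamp=0.0,
        devices=[
            Device("wall display", ["graphical"], (1200, 900), (0.0, 3.0)),
            Device("handheld", ["graphical", "speech"], (70, 50), (2.0, 1.0)),
        ],
    )
    print([d.name for d in snapshot.candidates_for("speech")])

A simulation or visualization tool could then iterate over a sequence of such snapshots to show how a distributed user interface would migrate as the device configuration changes.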
- Published
- 2006
- Full Text
- View/download PDF
33. Runtime transformations for modal independent user interface migration
- Author
-
Kris Luyten, Frank Van Reeth, Tom Van Laerhoven, and Karin Coninx
- Subjects
Natural language user interface ,Natural user interface ,business.industry ,Computer science ,Software development ,XSLT ,Interface description language ,computer.software_genre ,User interface design ,Human-Computer Interaction ,Human–computer interaction ,Virtual machine ,User interface ,business ,computer ,Software ,computer.programming_language - Abstract
The usage of computing systems has evolved dramatically over the last years. Starting from low-level procedural usage, in which a process for executing one or several tasks is carried out, computers now tend to be used in a problem-oriented way. Future computer usage will be centered around particular services rather than platforms or applications. These services should be independent of the technology used to interact with them. In this paper an approach is presented which provides a uniform interface to such services, without any dependence on modality, platform or programming language. Through the use of general user interface descriptions, expressed in XML and converted using XSLT, a uniform framework is presented for runtime migration of user interfaces. As a consequence, future services become easily extensible to all kinds of devices and modalities. Special attention is given to a component-based software development approach. Services represented by and grouped in components can offer a special interface for modal- and device-independent rendering. Components become responsible for describing their own possibilities and constraints for interaction. An implementation serving as a proof of concept, a runtime conversion of a joystick in a 3D virtual environment into a 2D dialog-based user interface, is developed.
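The abstract names XML user interface descriptions converted with XSLT as the core mechanism; the snippet below is a minimal, self-contained illustration of that general technique using lxml, with an invented element vocabulary rather than the paper's actual description language:

    # Minimal illustration of transforming an abstract XML UI description into
    # a concrete dialog with XSLT; the element names are invented for this example.
    from lxml import etree

    ui_xml = etree.XML("""
    <interface>
      <control id="move" type="range" label="Move forward"/>
      <control id="fire" type="trigger" label="Fire"/>
    </interface>""")

    xslt = etree.XML("""
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/interface">
        <dialog>
          <xsl:for-each select="control">
            <xsl:choose>
              <xsl:when test="@type='range'">
                <slider><xsl:value-of select="@label"/></slider>
              </xsl:when>
              <xsl:otherwise>
                <button><xsl:value-of select="@label"/></button>
              </xsl:otherwise>
            </xsl:choose>
          </xsl:for-each>
        </dialog>
      </xsl:template>
    </xsl:stylesheet>""")

    transform = etree.XSLT(xslt)
    print(etree.tostring(transform(ui_xml), pretty_print=True).decode())

Swapping in a different stylesheet at runtime would yield a different concrete rendering (for example a speech dialog instead of a 2D widget dialog) from the same abstract description, which is the general idea behind the migration approach.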
- Published
- 2003
- Full Text
- View/download PDF
34. PaperPulse: an integrated approach for embedding electronics in paper designs
- Author
-
Kris Luyten, Kashyap Todi, and Raf Ramakers
- Subjects
Microcontroller ,Engineering drawing ,Fabrication ,Human–computer interaction ,Computer science ,Computer graphics (images) ,Design tool ,Code (cryptography) ,Embedding ,Artifact (software development) ,Electronics ,Integrated approach ,Electronic circuit - Abstract
We present PaperPulse, a design and fabrication approach that enables designers without a technical background to produce standalone interactive paper artifacts by augmenting them with electronics. With PaperPulse, designers overlay pre-designed visual elements with widgets available in our design tool. PaperPulse provides designers with three families of widgets designed for smooth integration with paper, for a total of 20 different interactive components. We also contribute a logic demonstration and recording approach, Pulsation, that allows for specifying functional relationships between widgets. Using the final design and the recorded Pulsation logic, PaperPulse generates layered electronic circuit designs, and code that can be deployed on a microcontroller. By following automatically generated assembly instructions, designers can seamlessly integrate the microcontroller and widgets in the final paper artifact.
- Published
- 2015
35. A user study for comparing the programming efficiency of modifying executable multimodal interaction descriptions. A domain-specific language versus equivalent event-callback code
- Author
-
Jan Van den Bergh, Kris Luyten, Fredy Cuenca, and Karin Coninx
- Subjects
Computer science ,Programming language ,multimodal systems ,domain-specific languages ,declarative languages ,composite events ,Natural language programming ,computer.software_genre ,Language primitive ,High-level programming language ,Human–computer interaction ,Programming paradigm ,Fourth-generation programming language ,Programming domain ,First-generation programming language ,Low-level programming language ,computer - Abstract
The present paper describes an empirical user study intended to compare the programming efficiency of our proposed domain-specific language versus a mainstream event language when it comes to modifying multimodal interactions. Through concerted use of observations, interviews, and standardized questionnaires, we measured the completion rates, completion time, code testing effort, and perceived difficulty of the programming tasks, along with the perceived usability and perceived learnability of the tool supporting our proposed language. Based on this experience, we propose some guidelines for designing comparative user studies of programming languages. The paper also discusses the considerations we took into account when designing a multimodal interaction description language that intends to be well regarded by its users.
- Published
- 2015
36. Augmenting Social Interactions: Realtime Behavioural Feedback using Social Signal Processing Techniques
- Author
-
Kris Luyten, Ionut Damian, Tobias Baur, Johannes Schöning, Elisabeth André, and Chiew Seng Sean Tan
- Subjects
Signal processing ,Computer science ,business.industry ,Energy (esotericism) ,media_common.quotation_subject ,computer-enhanced interaction ,behaviour analysis ,peripheral feedback ,social signal processing ,Nonverbal communication ,Public speaking ,Presentation ,Human–computer interaction ,Component (UML) ,Openness to experience ,ddc:004 ,business ,Wearable technology ,media_common - Abstract
Nonverbal and unconscious behaviour is an important component of daily human-human interaction. This is especially true in situations such as public speaking, job interviews or information-sensitive conversations, where researchers have shown that an increased awareness of one's behaviour can improve the outcome of the interaction. With wearable technology, such as Google Glass, we now have the opportunity to augment social interactions and provide realtime feedback on one's behaviour in an unobtrusive way. In this paper we present Logue, a system that provides realtime feedback on the presenter's openness, body energy and speech rate during public speaking. The system analyses the user's nonverbal behaviour using social signal processing techniques and gives visual feedback on a head-mounted display. We conducted two user studies with a staged and a real presentation scenario, which showed that Logue's feedback was perceived as helpful and had a positive impact on the speakers' performance.
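As a purely illustrative sketch of the kind of realtime measure such a system monitors (not Logue's actual implementation), the following Python class estimates speech rate in words per minute over a sliding window and maps it to a coarse feedback hint; the window length and thresholds are invented:

    # Illustrative-only sketch: words per minute over a sliding window,
    # flagged against hypothetical thresholds for head-mounted feedback.
    from collections import deque
    import time

    class SpeechRateMonitor:
        def __init__(self, window_s=30.0, slow=100, fast=170):
            self.window_s = window_s            # sliding window length in seconds
            self.slow, self.fast = slow, fast   # hypothetical wpm thresholds
            self.word_times = deque()

        def on_word(self, t=None):
            """Call once per recognized word (e.g. from a speech recognizer)."""
            t = time.time() if t is None else t
            self.word_times.append(t)
            while self.word_times and t - self.word_times[0] > self.window_s:
                self.word_times.popleft()

        def feedback(self):
            wpm = len(self.word_times) * 60.0 / self.window_s
            if wpm > self.fast:
                return wpm, "slow down"
            if wpm < self.slow:
                return wpm, "speed up"
            return wpm, "ok"

    monitor = SpeechRateMonitor()
    for i in range(60):                         # simulate 60 words over ~20 seconds
        monitor.on_word(t=i * 0.333)
    print(monitor.feedback())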
- Published
- 2015
37. [Poster] Exploring social augmentation concepts for public speaking using peripheral feedback and real-time behavior analysis
- Author
-
Elisabeth André, Kris Luyten, Ionut Damian, Tobias Baur, Johannes Schöning, and Chiew Seng Sean Tan
- Subjects
Etiquette ,Public speaking ,Enthusiasm ,Unconscious mind ,Modalities ,Computer science ,Human–computer interaction ,media_common.quotation_subject ,Wearable computer ,Augmented reality ,ddc:004 ,Task (project management) ,media_common - Abstract
Non-verbal and unconscious behavior plays an important role in efficient human-to-human communication but is often undervalued when training people to become better communicators. This is particularly true for public speakers, who must not only behave according to social etiquette but also generate enthusiasm and interest among dozens if not hundreds of other people. In this paper we propose the concept of social augmentation using wearable computing, with the goal of giving users the ability to continuously monitor their performance as a communicator. To this end we explore interaction modalities and feedback mechanisms which lend themselves to this task.
- Published
- 2014
- Full Text
- View/download PDF
38. The EICS 2014 Doctoral Consortium
- Author
-
Kris Luyten, Laurence Nigay, Ingénierie de l’Interaction Homme-Machine (IIHM), Laboratoire d'Informatique de Grenoble (LIG), Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Institut National Polytechnique de Grenoble (INPG)-Centre National de la Recherche Scientifique (CNRS)-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Institut National Polytechnique de Grenoble (INPG)-Centre National de la Recherche Scientifique (CNRS)-Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF), Expertise Centre for Digital Media, and Hasselt University (UHasselt)
- Subjects
Interactive computing ,Work (electrical) ,ComputingMilieux_THECOMPUTINGPROFESSION ,Event (computing) ,Computer science ,[INFO.INFO-HC]Computer Science [cs]/Human-Computer Interaction [cs.HC] ,Data science ,ComputingMilieux_MISCELLANEOUS - Abstract
International audience; In this short extended abstract, we present the doctoral consortium of the Engineering Interactive Computing Systems (EICS) 2014 Symposium. Our goal is to make the doctoral consortium a useful event with maximum benefit for the participants, by having a dedicated event the day before the conference as well as the opportunity for participants to present their ongoing doctoral work to a wider audience during the conference.
- Published
- 2014
- Full Text
- View/download PDF
39. Investigating the effects of using biofeedback as visual stress indicator during video-mediated collaboration
- Author
-
Chiew Seng Sean Tan, Kris Luyten, Karin Coninx, and Johannes Schöning
- Subjects
Multimedia ,Interface (computing) ,medicine.medical_treatment ,media_common.quotation_subject ,education ,Applied psychology ,Workload ,Stress indicator ,computer.software_genre ,Biofeedback ,behavioral disciplines and activities ,Task (project management) ,H.5.m. Information Interfaces and Presentation (e.g. HCI): user interfaces ,interaction style ,CSCW ,biofeedback ,stress ,video-mediated collaboration ,Computer-supported cooperative work ,Stress (linguistics) ,medicine ,Worry ,Psychology ,computer ,psychological phenomena and processes ,media_common - Abstract
During remote video-mediated assistance, instructors often guide workers through problems and instruct them to perform unfamiliar or complex operations. However, the workers' performance might deteriorate due to stress. We argue that providing biofeedback to the instructor can improve communication and lead to lower stress. This paper presents a thorough investigation of the mental workload and stress perceived by twenty participants, paired up in an instructor-worker scenario, performing remote video-mediated tasks. The interface conditions differ in task, facial and biofeedback communication. Two self-report measures are used to assess mental workload and stress. Results show that pairs reported lower mental workload and stress when instructors used biofeedback compared to interfaces with a facial view. Significant correlations were found between task performance and reduced stress (i.e. increased task engagement and decreased worry) for instructors, and between task performance and declining mental workload (i.e. increased performance) for workers. Our findings provide insights to advance video-mediated interfaces for remote collaborative work.
- Published
- 2014
- Full Text
- View/download PDF
40. Paddle
- Author
-
Johannes Schöning, Kris Luyten, and Raf Ramakers
- Subjects
Form factor (design) ,Flexibility (engineering) ,Projector ,law ,Human–computer interaction ,Computer science ,deformable interfaces ,tangible interfaces ,mobile devices ,Magic (programming) ,Scroll ,Paddle ,Affordance ,Mobile device ,law.invention - Abstract
Paddle is a highly deformable mobile device that leverages engineering principles from the design of the Rubik's Magic, a folding plate puzzle. The various transformations supported by Paddle bridge the gap between the differently sized mobile devices available nowadays, such as phones, armbands, tablets and game controllers. Besides this, Paddle can be transformed into different physical controls in only a few steps, such as peeking options, a ring to scroll through lists, and a book-like form factor to leaf through pages. These special-purpose physical controls have the advantage of providing clear physical affordances and exploiting people's innate abilities for manipulating objects in the real world. We investigated the benefits of these interaction techniques in detail in [1]. In contrast to traditional touch screens, physical controls are usually less flexible and therefore less suitable for mobile settings. Paddle shows how mobile devices can be designed to incorporate physical controls and thus combine the flexibility of touch screens with the physical qualities that real-world controls provide. Our current prototype is tracked with an optical tracking system and uses a projector to provide visual output. In the future, we envision devices similar to Paddle that are entirely self-contained, using tiny integrated displays.
- Published
- 2014
- Full Text
- View/download PDF
41. A Domain-Specific Textual Language for Rapid Prototyping of Multimodal Interactive Systems
- Author
-
Jan Van den Bergh, Fredy Cuenca, Karin Coninx, and Kris Luyten
- Subjects
Rapid prototyping ,Event (computing) ,Programming language ,Computer science ,A domain ,Executable ,computer.file_format ,computer.software_genre ,Notation ,computer ,Range (computer programming) ,multimodal systems ,composite events ,declarative languages - Abstract
There are currently toolkits that allow the specification of executable multimodal human-machine interaction models. Some provide domain-specific visual languages with which a broad range of interactions can be modeled, but at the expense of bulky diagrams. Others, instead, interpret concise specifications written in existing textual languages, even though their non-specialized notations prevent the productivity improvement achievable through domain-specific ones. We propose a domain-specific textual language and its supporting toolkit; together they overcome the shortcomings of the existing approaches while retaining their strengths. The language provides notations and constructs specially tailored to compactly declare the event patterns raised during the execution of multimodal commands. The toolkit detects the occurrence of these patterns and invokes the functionality of a back-end system in response.
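The abstract does not show the language itself; as a hedged illustration of the kind of composite event pattern such a language would declare, the Python sketch below hand-codes the detection of a hypothetical "put that there" command combining speech and pointing events within a time window (the pattern, event format and time-out are invented):

    # Hypothetical sketch of composite-event detection of the kind a multimodal
    # DSL could compile down to: speech and pointing events forming one command.
    PATTERN = [("speech", "put"), ("point", None), ("speech", "there"), ("point", None)]

    def matches(events, pattern=PATTERN, max_gap=2.0):
        """events: time-ordered list of (timestamp, modality, value) tuples."""
        i = 0
        last_t = None
        for t, modality, value in events:
            want_mod, want_val = pattern[i]
            if modality == want_mod and (want_val is None or value == want_val):
                if last_t is not None and t - last_t > max_gap:
                    return False            # matched events too far apart in time
                last_t = t
                i += 1
                if i == len(pattern):
                    return True             # whole composite pattern observed
        return False

    stream = [(0.0, "speech", "put"), (0.4, "point", (120, 80)),
              (1.1, "speech", "there"), (1.5, "point", (340, 200))]
    print(matches(stream))                  # True for this event stream

A declarative DSL would let the developer state such a pattern in a few tokens and leave the bookkeeping above to the toolkit, which is the productivity argument the paper evaluates.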
- Published
- 2014
42. Raising Awareness on Smartphone Privacy Issues with SASQUATCH, and solving them with PrivacyPolice
- Author
-
Peter Quax, Kris Luyten, Wim Lamotte, and Bram Bonné
- Subjects
business.industry ,Computer science ,Visitor pattern ,Internet privacy ,Smartphone application ,Computer security ,computer.software_genre ,Raising (linguistics) ,Popular media ,The Internet ,business ,Set (psychology) ,Personally identifiable information ,computer ,Private information retrieval - Abstract
Smartphones leak personal information about their owners when used to connect to the Internet. Despite recent coverage of these issues in popular media, raising awareness remains problematic, since the leakage remains largely invisible to users. We designed a system, SASQUATCH, consisting of a network scanner and a public display, to draw visitors' attention and inform them about these issues. SASQUATCH first gathers private information about previous whereabouts, and then shows an anonymized version of this data on the public display to draw the visitor's attention. Next, SASQUATCH offers an interactive component that allows people to view the information their own smartphone is leaking in private, and then provides solutions (including a fully automated smartphone application) for securing against future privacy leaks. A set of initial field trials has shown that SASQUATCH is highly effective in raising awareness.
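One well-known mechanism behind this kind of leak is that phones broadcast Wi-Fi probe requests naming networks they previously joined. The sketch below illustrates that general technique and is not the SASQUATCH code; it uses scapy to collect probed network names per device and assumes a Wi-Fi interface in monitor mode (here called wlan0mon) and root privileges:

    # Rough illustration of the underlying leak: phones broadcast probe requests
    # naming networks they previously joined. Requires monitor mode and root.
    from scapy.all import sniff, Dot11ProbeReq, Dot11Elt

    seen = {}                               # MAC address -> set of probed SSIDs

    def handle(pkt):
        if pkt.haslayer(Dot11ProbeReq) and pkt.haslayer(Dot11Elt):
            ssid = pkt[Dot11Elt].info.decode(errors="ignore")
            if ssid:                        # ignore broadcast (empty) probes
                seen.setdefault(pkt.addr2, set()).add(ssid)
                print(f"{pkt.addr2} has previously used network '{ssid}'")

    sniff(iface="wlan0mon", prn=handle, store=False)   # interface name assumed

Collected network names can then be mapped to locations (for example via public Wi-Fi geolocation databases), which is what makes this class of leak reveal previous whereabouts.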
- Published
- 2014
- Full Text
- View/download PDF
43. Timisto
- Author
-
Mieke Haesen, Kris Luyten, Karin Coninx, Joël Vogt, and Andreas Meier
- Subjects
Pluralistic walkthrough ,Multimedia ,Computer science ,Process (engineering) ,media_common.quotation_subject ,Context (language use) ,Creativity ,Notation ,computer.software_genre ,Human–computer interaction ,ComputerApplications_MISCELLANEOUS ,Storyboard ,Design methods ,Engineering design process ,computer ,media_common - Abstract
Storyboarding is a technique that is often used for the conception of new interactive systems. A storyboard illustrates graphically how a system is used by its users and what a typical context of usage is. Although the informal notation of a storyboard stimulates creativity and makes it easy for everyone to understand, it is more difficult to integrate into further steps of the engineering process. We present an approach, "Time In Storyboards" (Timisto), to extract valuable information on how various interactions with the system are positioned in time with respect to each other. Timisto does not interfere with the creative process of storyboarding, but maximizes the structured information about time that can be deduced from a storyboard.
- Published
- 2013
- Full Text
- View/download PDF
44. Crossing the bridge over Norman's gulf of execution : revealing feedforward's true identity
- Author
-
Karin Coninx, Kris Luyten, Jo Vermeulen, and Elise van den Hoven
- Subjects
ComputingMethodologies_PATTERNRECOGNITION ,business.industry ,Computer science ,affordances, design, feedback, feedforward, theory ,Feed forward ,Artificial intelligence ,ComputerSystemsOrganization_PROCESSORARCHITECTURES ,business ,Gulf of execution - Abstract
Feedback and affordances are two of the most well-known principles in interaction design. Unfortunately, the related and equally important notion of feedforward has not been given as much consideration. Nevertheless, feedforward is a powerful design principle for bridging Norman's Gulf of Execution. We reframe feedforward by disambiguating it from related design principles such as feedback and perceived affordances, and identify new classes of feedforward. In addition, we present a reference framework that provides a means for designers to explore and recognize different opportunities for feedforward.
- Published
- 2013
- Full Text
- View/download PDF
45. Activity-centric Support for Ad Hoc Knowledge Work: A Case Study of Co-activity Manager
- Author
-
Kris Luyten, Jo Vermeulen, Steven Houben, Karin Coninx, and Jakob E. Bardram
- Subjects
activity theory, activity-centric computing, collaborative work, desktop interface ,Knowledge management ,Interface (Java) ,Team software process ,business.industry ,Process (engineering) ,Computer science ,Context (language use) ,Activity theory ,activity theory ,desktop interface ,activity-centric computing ,collaborative work ,Work (electrical) ,Human–computer interaction ,Software deployment ,business - Abstract
Modern knowledge work consists of both individual and highly collaborative activities that are typically composed of a number of configuration, coordination and articulation processes. The desktop interface today, however, provides very little support for these processes and rather forces knowledge workers to adapt to the technology. We introduce co-Activity Manager, an activity-centric desktop system that (i) provides tools for ad hoc dynamic configuration of a desktop working context, (ii) supports both explicit and implicit articulation of ongoing work through a built-in collaboration manager and (iii) provides the means to coordinate and share working context with other users and devices. In this paper, we discuss the activity theory informed design of co-Activity Manager and report on a 14-day field deployment in a multi-disciplinary software development team. The study showed that the activity-centric workspace supports different individual and collaborative work configuration practices, and that activity-centric collaboration is a two-phase process consisting of an activity-sharing phase and a per-activity coordination phase.
- Published
- 2013
- Full Text
- View/download PDF
46. Liftacube: a prototype for pervasive rehabilitation in a residential setting
- Author
-
Marijke Vandermaesen, Karin Coninx, Richard P. J. Geers, Tom De Weyer, and Kris Luyten
- Subjects
Residential environment ,medicine.medical_specialty ,Pervasive gaming ,Activities of daily living ,Rehabilitation ,Computer science ,medicine.medical_treatment ,Motor training, neurorehabilitation, physical therapy, pervasive healthcare, residential environment, upper extremity ,Neurological disorder ,medicine.disease ,Physical medicine and rehabilitation ,medicine ,Paraplegia ,Spinal cord injury ,Neurorehabilitation
Persons with neurological disorders or spinal cord injuries, such as Cerebrovascular Accident (CVA) or paraplegia patients, experience significantly reduced physical abilities during their activities of daily living. By frequent and intense physical therapy, these patients can sustain or even enhance their functional performance. However, physical therapy, whether or not it is supported by technology, can currently only be followed in a rehabilitation centre under supervision of a therapist. To provide technology-supported physical therapy for independent use by the patient in the home situation, our current research explores pervasive technologies for rehabilitation systems. In this paper, we describe our pervasive prototype 'Liftacube' for training of the upper extremities. An initial evaluation with patients with a neurological disorder or spinal cord injury (CVA and paraplegia patients) and their therapists reveals a great appreciation for this motivating pervasive gaming prototype. Reflections on the technical set-up (such as size, form factor, and materials) and interaction preferences (such as feedback, games, and movements for interaction) for pervasive rehabilitation systems in a residential environment are elaborated upon.
- Published
- 2013
47. Assessing the support provided by a toolkit for rapid prototyping of multimodal systems
- Author
-
Davy Vanacken, Fredy Cuenca, Kris Luyten, and Karin Coninx
- Subjects
Rapid prototyping ,Domain-specific language ,Source code ,Modalities ,Multimodal systems ,User interface toolkits ,Visual languages ,Domain specific languages ,Computer science ,Interface (Java) ,Human–computer interaction ,Scale (chemistry) ,media_common.quotation_subject ,Multimodal interaction ,media_common ,Task (project management) - Abstract
Choosing an appropriate toolkit for creating a multimodal interface is a cumbersome task. Several specialized toolkits include fusion and fission engines that allow developers to combine and decompose modalities to capture multimodal input and provide multimodal output. Unfortunately, the extent to which these toolkits can facilitate the creation of a multimodal interface is hard or impossible to estimate, due to the absence of a scale on which a toolkit's capabilities can be measured. In this paper, we propose a measurement scale, which allows the assessment of specialized toolkits without the need for time-consuming testing or source code analysis. This scale is used to measure and compare the capabilities of three toolkits: CoGenIVE, HephaisTK and ICon.
- Published
- 2013
48. Bro-cam: Improving Game Experience with Empathic Feedback Using Posture Tracking
- Author
-
Johannes Schöning, Karin Coninx, Kris Luyten, Jan Schneider Barnes, and Chiew Seng Sean Tan
- Subjects
Matching (statistics) ,Extraversion and introversion ,media_common.quotation_subject ,ComputingMilieux_PERSONALCOMPUTING ,Mood ,Personality type ,Human–computer interaction ,Openness to experience ,Personality ,Tracking (education) ,Psychology ,Social psychology ,User feedback ,Personalized Feedback ,Posture Recognition ,Persuasive Computing ,Game experience ,media_common - Abstract
In today's videogames, user feedback is often provided through raw statistics and scoreboards. We envision that incorporating empathic feedback matching the player's current mood will improve the overall gaming experience. In this paper we present Bro-cam, a novel system that provides empathic feedback to players based on their body postures. Different body postures of the players are used as an indicator of their openness. From their level of openness, Bro-cam profiles the players into different personality types, ranging from introvert to extrovert. Empathic feedback is then automatically generated and matched to their preferences for certain humoristic feedback statements. We use a depth camera to track the player's body postures and movements during the game and analyze these to provide customized feedback. We conducted a user study involving 32 players to investigate their subjective assessment of the empathic game feedback. Semi-structured interviews reveal that participants were positive about the empathic feedback and that Bro-cam significantly improves their game experience.
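As a purely hypothetical illustration of mapping tracked postures to an openness level and a feedback style (the joint choices, formula and thresholds are invented and not Bro-cam's), one could compute something like the following:

    # Purely illustrative mapping from depth-camera joints to an "openness"
    # score and a feedback style; all values and rules are invented.
    def openness(joints):
        """joints: dict of joint name -> (x, y, z) from a depth camera."""
        shoulder_width = abs(joints["shoulder_left"][0] - joints["shoulder_right"][0])
        wrist_spread = abs(joints["wrist_left"][0] - joints["wrist_right"][0])
        return wrist_spread / shoulder_width    # > 1 means arms opened outward

    def feedback_style(scores):
        avg = sum(scores) / len(scores)
        return "playful, extroverted banter" if avg > 1.3 else "gentle encouragement"

    frame = {"shoulder_left": (-0.2, 1.4, 2.0), "shoulder_right": (0.2, 1.4, 2.0),
             "wrist_left": (-0.5, 1.0, 1.9), "wrist_right": (0.5, 1.0, 1.9)}
    print(openness(frame), feedback_style([openness(frame)]))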
- Published
- 2013
- Full Text
- View/download PDF
49. Informing Intelligent User Interfaces by Inferring Affective States from Body Postures in Ubiquitous Computing Environments
- Author
-
Johannes Schöning, Karin Coninx, Kris Luyten, and Chiew Seng Sean Tan
- Subjects
Social Behavior ,Emotion Recognition ,Ubicomp ,Intelligent User Interfaces ,Ubiquitous computing ,Association rule learning ,Human–computer interaction ,Computer science ,Instrumentation (computer programming) ,State (computer science) ,User interface ,Set (psychology) ,Affective computing ,Implementation - Abstract
Intelligent User Interfaces can benefit from having knowledge of the user's emotion. However, current implementations for detecting affective states often constrain the user's freedom of movement by instrumenting them with sensors. This prevents affective computing from being deployed in naturalistic and ubiquitous computing contexts. In this paper, we present a novel system called mASqUE, which uses a set of association rules to infer someone's affective state from their body postures. This is done without any user instrumentation and using off-the-shelf, inexpensive commodity hardware: a depth camera tracks the body postures of the users, and these postures are also used as an indicator of their openness. By combining the posture information with physiological sensor measurements we were able to mine a set of association rules relating postures to affective states. We demonstrate the possibility of inferring affective states from body postures in ubiquitous computing environments, and our study also provides insights into how this opens up new possibilities for IUIs to access users' affective states from body postures in a non-intrusive way.
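A toy Python sketch of the association-rule idea follows; the posture labels, affective states and thresholds are invented for illustration and are not the study's data, but the support/confidence computation is the standard one:

    # Toy sketch: count how often a posture label co-occurs with an affective
    # state and keep rules above support/confidence thresholds (invented data).
    from collections import Counter

    observations = [                         # (posture, affective state) pairs
        ("leaning_forward", "engaged"), ("leaning_forward", "engaged"),
        ("arms_crossed", "defensive"), ("slumped", "bored"),
        ("leaning_forward", "engaged"), ("arms_crossed", "bored"),
    ]

    def mine_rules(data, min_support=0.2, min_confidence=0.6):
        n = len(data)
        pair_counts = Counter(data)
        posture_counts = Counter(p for p, _ in data)
        rules = []
        for (posture, affect), count in pair_counts.items():
            support = count / n
            confidence = count / posture_counts[posture]
            if support >= min_support and confidence >= min_confidence:
                rules.append((posture, affect, support, confidence))
        return rules

    for posture, affect, s, c in mine_rules(observations):
        print(f"{posture} -> {affect} (support={s:.2f}, confidence={c:.2f})")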
- Published
- 2013
50. Informing the design of situated glyphs for a care facility
- Author
-
Fahim Kawsar, Adalberto L. Simeone, Karin Coninx, Jo Vermeulen, Kris Luyten, and Gerd Kortuem
- Subjects
Ubiquitous computing ,Human–computer interaction ,business.industry ,Information space ,Computer science ,Specific-information ,Health care ,Situated ,Information flow (information theory) ,Glyph ,business ,Qualitative research - Abstract
Informing caregivers by providing them with contextual medical information can significantly improve the quality of patient care activities. However, information flow in hospitals is still tied to traditional manual or digitised lengthy patient record files that are often not accessible while caregivers are attending to patients. Leveraging the proliferation of pervasive awareness technologies (sensors, actuators and mobile displays), recent studies have explored this information presentation aspect by borrowing theories from context-aware computing, i.e., presenting subtle information contextually to support the activity at hand. However, the understanding of the information space (i.e., what information should be presented) is still fairly limited, which inhibits the deployment of such real-time activity support systems. To this end, this paper first presents situated glyphs, a graphical entity to encode situation-specific information, and then presents our findings from an in-situ qualitative study addressing the information space tailored to such glyphs. Applying technology probes using situated glyphs and different glyph display form factors, the study aimed at uncovering the information space pertaining to both primary and secondary medical care. Our analysis has resulted in a large set of information types and has given us deeper insight into the principles for designing future situated glyphs. We report our findings in this paper, which we expect will provide a solid foundation for designing future assistive systems to support patient care activities.
- Published
- 2012
- Full Text
- View/download PDF