82 results for "Michel Pahud"
Search Results
52. Experimental Study of Stroke Shortcuts for a Touchscreen Keyboard with Gesture-Redundant Keys Removed
- Author
-
Michel Pahud, Ahmed Sabbir Arif, Ken Hinckley, and Bill Buxton
- Subjects
Space (punctuation), Alphanumeric, Computer science, Speech recognition, Word error rate, Touchscreen, Text entry, Typing, Input method, Computer hardware, Gesture
- Abstract
We present experimental results for two-handed typing on a graphical Qwerty keyboard augmented with linear strokes for Space, Backspace, Shift, and Enter---that is, swipes to the right, left, up, and diagonally down-left, respectively. A first study reveals that users are more likely to adopt these strokes, and type faster, when the keys corresponding to the strokes are removed from the keyboard, as compared to an equivalent stroke-augmented keyboard with the keys intact. A second experiment shows that the keys-removed design yields 16% faster text entry than a standard graphical keyboard for phrases containing mixed-case alphanumeric and special symbols, without increasing error rate. Furthermore, the design is easy to learn: users exhibited performance gains almost immediately, and 90% of test users indicated they would want to use it as their primary input method.
- Published
- 2020
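A minimal sketch of the kind of stroke classification described in entry 52 above: mapping a short linear swipe on the keyboard to Space, Backspace, Shift, or Enter by its direction. The tap/stroke threshold and the angle ranges are illustrative assumptions, not the paper's actual parameters.

```python
import math

# Hypothetical threshold (not taken from the paper): anything shorter than
# TAP_RADIUS pixels is treated as an ordinary key tap rather than a stroke.
TAP_RADIUS = 20.0

def classify_stroke(x0, y0, x1, y1):
    """Map a linear stroke to a keyboard action by its direction:
    right -> Space, left -> Backspace, up -> Shift, down-left -> Enter,
    as described in the abstract. Returns None for a plain tap or an
    ambiguous direction."""
    dx, dy = x1 - x0, y1 - y0
    if math.hypot(dx, dy) < TAP_RADIUS:
        return None
    angle = math.degrees(math.atan2(-dy, dx)) % 360  # screen y grows downward
    if angle <= 25 or angle >= 335:
        return "SPACE"       # rightward swipe
    if 65 <= angle <= 115:
        return "SHIFT"       # upward swipe
    if 160 <= angle <= 200:
        return "BACKSPACE"   # leftward swipe
    if 205 <= angle <= 245:
        return "ENTER"       # diagonal down-left swipe
    return None

print(classify_stroke(100, 300, 220, 300))  # rightward -> SPACE
print(classify_stroke(100, 300, 40, 360))   # down-left -> ENTER
```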
53. SurfaceFleet: Exploring Distributed Interactions Unbounded from Device, Application, User, and Time
- Author
-
Christian Holz, David Ledo, Xiaokuan Zhang, Hemant Bhaskar Surale, Nathalie Henry Riche, Anand Waghmare, Frederik Brudy, Jonathan Goldstein, Marcus Peinado, Michel Pahud, Umar Farooq Minhas, William A. S. Buxton, Shannon Joyner, Badrish Chandramouli, and Ken Hinckley
- Subjects
Computer science, Distributed computing, Cloud computing, Drag and drop, Workflow, Reference implementation
- Abstract
Knowledge work increasingly spans multiple computing surfaces. Yet in status quo user experiences, content as well as tools, behaviors, and workflows are largely bound to the current device, running the current application, for the current user, and at the current moment in time. SurfaceFleet is a system and toolkit that uses resilient distributed programming techniques to explore cross-device interactions that are unbounded in these four dimensions of device, application, user, and time. As a reference implementation, we describe an interface built using SurfaceFleet that employs lightweight, semi-transparent UI elements known as Applets. Applets appear always-on-top of the operating system, application windows, and (conceptually) above the device itself. But all connections and synchronized data are virtualized and made resilient through the cloud. For example, a sharing Applet known as a Portfolio allows a user to drag and drop unbound Interaction Promises into a document. Such promises can then be fulfilled with content asynchronously, at a later time (or multiple times), from another device, and by the same or a different user.
- Published
- 2020
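SurfaceFleet's "Interaction Promises" (entry 53 above) are described as placeholders that can be fulfilled with content later, from another device or user. Below is a rough, hypothetical sketch of that idea using asyncio futures; the class and method names are invented for illustration and are not SurfaceFleet's API, and the cloud synchronization is simulated locally.

```python
import asyncio

class InteractionPromise:
    """A placeholder dropped into a document now and fulfilled with content
    later, possibly from another device or user (a loose analogue of the
    Promises in the SurfaceFleet abstract; all names here are hypothetical)."""

    def __init__(self, description: str):
        self.description = description
        self._future = asyncio.get_running_loop().create_future()

    def fulfill(self, content, source_device: str):
        # In the real system this state would arrive via the cloud, not locally.
        if not self._future.done():
            self._future.set_result((content, source_device))

    async def resolve(self):
        return await self._future


async def demo():
    promise = InteractionPromise("photo to be added later from my phone")
    loop = asyncio.get_running_loop()
    # Simulate a fulfillment arriving later from another device.
    loop.call_later(0.1, promise.fulfill, "IMG_0042.jpg", "phone")
    content, device = await promise.resolve()
    print(f"'{promise.description}' fulfilled with {content} from {device}")

asyncio.run(demo())
```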
54. Pen-based Interaction with Spreadsheets in Mobile Virtual Reality
- Author
-
Daniel Schneider, Michel Pahud, Eyal Ofek, Per Ola Kristensson, Travis Gesslein, Verena Biener, Philipp Gagel, and Jens Grubert
- Subjects
Computer science, Human–computer interaction, Mobile computing, Virtual reality, Visualization, Data visualization, Debugging, Augmented reality, User interface, Mobile device
- Abstract
Virtual Reality (VR) can enhance the display and interaction of mobile knowledge work, and in particular spreadsheet applications. Spreadsheets are widely used yet challenging to interact with, especially on mobile devices, and using them in VR has not been explored in depth. A special characteristic of the domain is the contrast between the immersive, large display space afforded by VR and the very limited interaction space available to an information worker on the go, such as an airplane seat or a small work-space. To close this gap, we present a tool-set for enhancing spreadsheet interaction on tablets using immersive VR headsets and pen-based input. This combination opens up many possibilities for enhancing the productivity of spreadsheet interaction. We propose to use the space around and in front of the tablet for enhanced visualization of spreadsheet data and meta-data, for example extending sheet display beyond the bounds of the physical screen, or easing debugging by uncovering hidden dependencies between a sheet's cells. Combining the precise on-screen input of a pen with spatial sensing around the tablet, we propose tools for the efficient creation and editing of spreadsheet functions, such as off-the-screen layered menus, visualization of sheet dependencies, and gaze-and-touch-based switching between spreadsheet tabs. We study the feasibility of the proposed tool-set using a video-based online survey and an expert-based assessment of indicative human performance potential.
- Published
- 2020
- Full Text
- View/download PDF
55. Accuracy of Commodity Finger Tracking Systems for Virtual Reality Head-Mounted Displays
- Author
-
Daniel Schneider, Jens Grubert, Alexander Martschenko, Axel Simon Kublin, Michel Pahud, Alexander Otte, Eyal Ofek, and Per Ola Kristensson
- Subjects
Finger tracking, Computer science, Headset, Coordinate system, Computer vision, Tracking system, Artificial intelligence, Virtual reality
- Abstract
Representing users' hands and fingers in virtual reality is crucial for many tasks. Recently, virtual reality head-mounted displays capable of camera-based inside-out tracking and finger and hand tracking have become popular, complementing add-on solutions such as Leap Motion. However, interacting with physical objects requires an accurate grounded positioning of the virtual reality coordinate system relative to relevant objects, and good spatial positioning of the user's fingers and hands. To better understand the capabilities of virtual reality headset finger-tracking solutions for interacting with physical objects, we ran a controlled experiment (n = 24) comparing two commodity hand and finger tracking systems (HTC Vive and Leap Motion) and report on the accuracy of commodity hand tracking systems.
- Published
- 2020
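Entry 55 above reports the positional accuracy of commodity finger-tracking systems. One plausible way such accuracy is quantified, sketched below, is the mean Euclidean error between tracked fingertip positions and ground-truth positions expressed in a common coordinate system; this is an illustrative reconstruction, not the study's actual analysis code, and the sample values are invented.

```python
import numpy as np

def mean_fingertip_error(tracked: np.ndarray, ground_truth: np.ndarray) -> float:
    """Mean Euclidean distance (in the units of the input, e.g. mm) between
    tracked fingertip positions and ground-truth positions.

    Both arrays have shape (n_samples, 3) and are assumed to already be
    expressed in the same coordinate system.
    """
    errors = np.linalg.norm(tracked - ground_truth, axis=1)
    return float(errors.mean())

# Illustrative data only: 3 samples with a few millimetres of tracking error.
tracked = np.array([[10.2, 0.0, 5.1], [20.5, 1.0, 4.8], [30.1, -0.5, 5.0]])
truth   = np.array([[10.0, 0.0, 5.0], [20.0, 1.0, 5.0], [30.0,  0.0, 5.0]])
print(f"mean error: {mean_fingertip_error(tracked, truth):.2f} mm")
```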
56. Inking Your Insights: Investigating Digital Externalization Behaviors During Data Analysis
- Author
-
Ken Hinckley, Bongshin Lee, Nathalie Henry Riche, Michel Pahud, Jessica Hullman, Matthew Brehmer, and Yea-Seul Kim
- Subjects
Annotation, Externalization, Computer science, Human–computer interaction, Recommender system, Note-taking, Visualization
- Abstract
Externalizing one's thoughts can be helpful during data analysis, for example marking interesting data, noting hypotheses, and drawing diagrams. In this paper, we present two exploratory studies conducted to investigate the types and use of externalizations during the analysis process. We first studied how people take notes during different stages of data analysis using VoyagerNote, a visualization recommendation system augmented to support text annotations and coupled with participants' favorite external note-taking tools (e.g., word processor, pen & paper). Externalizations manifested mostly as notes written on paper or in a word processor, with annotations atop views used almost exclusively in the initial phase of analysis. In the second study, we investigated two specific opportunities: (1) integrating digital pen input to facilitate the use of free-form externalizations and (2) providing a more explicit linking between visualizations and externalizations. We conducted the study with VoyagerInk, a visualization system that enabled free-form externalization with a digital pen as well as touch interactions to link externalizations to data. Participants created more graphical externalizations with VoyagerInk and revisited over half of their externalizations via the linking mechanism. Reflecting on the findings from these two studies, we discuss implications for the design of data analysis tools.
- Published
- 2019
57. ReconViguRation: Reconfiguring Physical Keyboards in Virtual Reality
- Author
-
Jens Grubert, Alexander Otte, Daniel Schneider, Bastian Kuth, Oliver Dietz, Michel Pahud, Jörg Müller, Per Ola Kristensson, Mohamad Shahm Damlakhi, Philipp Gagel, Eyal Ofek, and Travis Gesslein
- Subjects
Physical keyboards, Text entry, Computer science, Human–computer interaction, Window manager, Virtual reality, Text processing, Immersion (virtual reality), Typing, Macro, Password, Software
- Abstract
Physical keyboards are common peripherals for personal computers and are efficient standard text entry devices. Recent research has investigated how physical keyboards can be used in immersive head-mounted display-based Virtual Reality (VR). So far, the physical layout of keyboards has typically been transplanted into VR for replicating typing experiences in a standard desktop environment. In this paper, we explore how to fully leverage the immersiveness of VR to change the input and output characteristics of physical keyboard interaction within a VR environment. This allows individual physical keys to be reconfigured to the same or different actions and visual output to be distributed in various ways across the VR representation of the keyboard. We explore a set of input and output mappings for reconfiguring the virtual presentation of physical keyboards and probe the resulting design space by specifically designing, implementing and evaluating nine VR-relevant applications: emojis, languages and special characters, application shortcuts, virtual text processing macros, a window manager, a photo browser, a whack-a-mole game, secure password entry and a virtual touch bar. We investigate the feasibility of the applications in a user study with 20 participants and find that, among other things, they are usable in VR. We discuss the limitations and possibilities of remapping the input and output characteristics of physical keyboards in VR based on empirical findings and analysis and suggest future research directions in this area.
- Published
- 2019
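ReconViguRation (entry 57 above) reconfigures what individual physical keys do and show in VR. A minimal, hypothetical sketch of that idea is a per-application remapping layer between physical key codes and VR actions and virtual key labels; the application names and bindings below are examples invented for illustration, not the system's actual configuration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KeyBinding:
    action: str  # what the physical key triggers in VR
    label: str   # what is rendered on the virtual keycap

# Hypothetical per-application remaps keyed by physical key code; the abstract's
# applications include emojis, shortcuts, a window manager, etc., but these
# specific bindings are invented for illustration.
REMAPS = {
    "emoji": {
        "KeyA": KeyBinding("insert grinning face", "😀"),
        "KeyS": KeyBinding("insert party popper", "🎉"),
    },
    "window_manager": {
        "KeyA": KeyBinding("focus window to the left", "◀"),
        "KeyS": KeyBinding("focus window to the right", "▶"),
    },
}

def handle_key(app: str, key_code: str) -> Optional[KeyBinding]:
    """Resolve a physical key press to its VR action and virtual key label."""
    return REMAPS.get(app, {}).get(key_code)

print(handle_key("emoji", "KeyA"))
print(handle_key("window_manager", "KeyS"))
```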
58. DrawMyPhoto
- Author
-
Blake Williford, Tracy Hammond, Doke Abhay Tanaji, Ken Hinckley, and Michel Pahud
- Subjects
Engineering drawing, Parsing, Computer science, Sketch recognition, Image processing, Visual arts education, User experience design
- Abstract
We present DrawMyPhoto, an interactive system that can assist a drawing novice in producing a quality drawing by automatically parsing a photograph into a step-by-step drawing tutorial. The system utilizes image processing to produce distinct line work and shading steps from the photograph, and offers novel real-time feedback on pressure and tilt, along with grip suggestions as the user completes the tutorial. Our evaluation showed that the generated steps and real-time assistance allowed novices to produce significantly better drawings than with a more traditional grid-based approach, particularly with respect to accuracy, shading, and details. This was confirmed by domain experts who blindly rated the drawings. The participants responded well to the real-time feedback, and believed it helped them learn proper shading techniques and the order in which a drawing should be approached. We saw promising potential in the tool to boost the confidence of novices and lower the barrier to artistic creation.
- Published
- 2019
59. Sensing Posture-Aware Pen+Touch Interaction on Tablets
- Author
-
Gierad Laput, Michael J. McGuffin, Yang Zhang, Haijun Xia, Christian Holz, Ken Hinckley, Andrew Pyon Mittereder, William A. S. Buxton, Michel Pahud, Xiao Tu, and Fei Su
- Subjects
Orientation (computer vision), Computer science, Grasp, Touchscreen, Human–computer interaction, User interface
- Abstract
Many status-quo interfaces for tablets with pen + touch input capabilities force users to reach for device-centric UI widgets at fixed locations, rather than sensing and adapting to the user-centric posture. To address this problem, we propose sensing techniques that transition between various nuances of mobile and stationary use via postural awareness. These postural nuances include shifting hand grips, varying screen angle and orientation, planting the palm while writing or sketching, and detecting what direction the hands approach from. To achieve this, our system combines three sensing modalities: 1) raw capacitance touchscreen images, 2) inertial motion, and 3) electric field sensors around the screen bezel for grasp and hand proximity detection. We show how these sensors enable posture-aware pen+touch techniques that adapt interaction and morph user interface elements to suit fine-grained contexts of body-, arm-, hand-, and grip-centric frames of reference.
- Published
- 2019
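Entry 59 above combines three sensing modalities (raw capacitive touchscreen images, inertial motion, and bezel electric-field sensors) to infer posture. Below is a highly simplified, hypothetical fusion rule showing how such signals might be combined into a coarse posture label; the field names, thresholds, and labels are invented for illustration, not the paper's classifier.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    palm_contact_area: float   # from the raw capacitive image, in cm^2
    tilt_deg: float            # screen tilt from the inertial sensor
    left_bezel_grip: bool      # from electric-field sensors around the bezel
    right_bezel_grip: bool

def infer_posture(frame: SensorFrame) -> str:
    """Very coarse posture inference (illustrative thresholds only)."""
    if frame.palm_contact_area > 15.0:
        return "palm planted: likely writing/sketching, suppress stray touches"
    if frame.left_bezel_grip and not frame.right_bezel_grip:
        return "held in left hand: place UI widgets near the right edge"
    if frame.right_bezel_grip and not frame.left_bezel_grip:
        return "held in right hand: place UI widgets near the left edge"
    if frame.tilt_deg < 10.0 and not (frame.left_bezel_grip or frame.right_bezel_grip):
        return "flat on table: stationary use, show full toolbars"
    return "mobile use: compact, reach-friendly UI"

print(infer_posture(SensorFrame(palm_contact_area=22.0, tilt_deg=35.0,
                                left_bezel_grip=False, right_bezel_grip=True)))
```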
60. DataToon
- Author
-
Benjamin Bach, Nathalie Henry Riche, Guanpeng Xu, Ken Hinckley, Michel Pahud, Michael J. McGuffin, Matthew Brehmer, Nam Wook Kim, Hanspeter Pfister, and Haijun Xia
- Subjects
Dynamic network analysis, Computer science, Comics, Visualization, Presentation, Data visualization, Human–computer interaction, Narrative, Storytelling
- Abstract
Comics are an entertaining and familiar medium for presenting compelling stories about data. However, existing visualization authoring tools do not leverage this expressive medium. In this paper, we seek to incorporate elements of comics into the construction of data-driven stories about dynamic networks. We contribute DataToon, a flexible data comic storyboarding tool that blends analysis and presentation with pen and touch interactions. A storyteller can use DataToon to rapidly generate visualization panels, annotate them, and position them within a canvas to produce a visually compelling narrative. In a user study, participants quickly learned to use DataToon for producing data comics.
- Published
- 2019
61. Text Entry in Immersive Head-Mounted Display-Based Virtual Reality Using Standard Keyboards
- Author
-
Lukas Witzani, Jens Grubert, Eyal Ofek, Per Ola Kristensson, Michel Pahud, and Matthias Kranz
- Subjects
Computer science, Human–computer interaction, Optical head-mounted display, Virtual reality, Touchscreen, User experience design, Typing, User interface
- Abstract
We study the performance and user experience of two popular mainstream text entry devices, desktop keyboards and touchscreen keyboards, for use in Virtual Reality (VR) applications. We discuss the limitations arising from limited visual feedback, and examine the efficiency of different strategies of use. We analyze a total of 24 hours of typing data in VR from 24 participants and find that novice users are able to retain about 60% of their typing speed on a desktop keyboard and about 40-45% of their typing speed on a touchscreen keyboard. We also find no significant learning effects, indicating that users can quickly transfer their typing skills into VR. Besides investigating baseline performances, we study the position in which keyboards and hands are rendered in space. We find that this does not adversely affect performance for desktop keyboard typing and results in a performance trade-off for touchscreen keyboard typing.
- Published
- 2018
- Full Text
- View/download PDF
62. Effects of Hand Representations for Typing in Virtual Reality
- Author
-
Lukas Witzani, Matthias Kranz, Per Ola Kristensson, Jens Grubert, Eyal Ofek, and Michel Pahud
- Subjects
Computer science, Human–computer interaction, Kinematics, Virtual reality, Visualization, Typing, User interface
- Abstract
Alphanumeric text entry is a challenge for Virtual Reality (VR) applications. VR enables new capabilities, impossible in the real world, such as an unobstructed view of the keyboard, without occlusion by the user's physical hands. Several hand representations have been proposed for typing in VR on standard physical keyboards. However, to date, these hand representations have not been compared regarding their performance and effects on presence for VR text entry. Our work addresses this gap by comparing existing hand representations with minimalistic fingertip visualization. We study the effects of four hand representations (no hand representation, inverse kinematic model, fingertip visualization using spheres, and video inlay) on typing in VR using a standard physical keyboard with 24 participants. We found that the fingertip visualization and video inlay both resulted in statistically significant lower text entry error rates compared to the no-hand or inverse kinematic model representations. We found no statistical differences in text entry speed.
- Published
- 2018
63. Mobiles as Portals for Interacting with Virtual Data Visualizations
- Author
-
Michel Pahud, Eyal Ofek, Nathalie Henry Riche, Christophe Hurter, and Jens Grubert
- Subjects
Mobile devices, Data visualization, Human–computer interaction, Portal, Compound navigation
- Abstract
We propose a set of techniques leveraging mobile devices as lenses to explore, interact with, and annotate n-dimensional data visualizations. The democratization of mobile devices, with their arrays of integrated sensors, opens up opportunities to create experiences for anyone to explore and interact with large information spaces anywhere. In this paper, we propose to revisit ideas behind the Chameleon prototype of Fitzmaurice et al., initially envisioned in the 90s for navigation, before spatially-aware devices became mainstream. We also take advantage of other input modalities such as pen and touch not only to navigate the space using the mobile as a lens, but also to interact with and annotate it by adding toolglasses.
- Published
- 2018
- Full Text
- View/download PDF
64. The utility of Magic Lens interfaces on handheld devices for touristic map navigation
- Author
-
Jens Grubert, Raphael Grasset, Hartmut Seichter, Michel Pahud, and Dieter Schmalstieg
- Subjects
Peephole, Computer science, Interface (computing), Usability, Workspace, Lens (optics), User experience design, Human–computer interaction, Computer graphics, Augmented reality, Mobile device
- Abstract
This paper investigates the utility of the Magic Lens metaphor on small-screen handheld devices for map navigation, given state-of-the-art computer vision tracking. We investigate both performance and user experience aspects. In contrast to previous studies, a semi-controlled field experiment (n = 18) in a ski resort indicated significantly longer task completion times for a Magic Lens compared to a Static Peephole interface in an information browsing task. A follow-up controlled laboratory study (n = 21) investigated the impact of workspace size on the performance and usability of both interfaces. We show that for small workspaces Static Peephole outperforms Magic Lens. As workspace size increases, performance becomes equivalent and subjective measurements indicate less demand and better usability for Magic Lens. Finally, we discuss the relevance of our findings for the application of Magic Lens interfaces for map interaction in touristic contexts. Highlights: investigation of Magic Lens and Static Peephole on smartphones for maps; two experiments, a semi-controlled field experiment in a ski resort and a lab study; for A0-sized posters Magic Lens is slower and less preferred; for larger workspace sizes performance between interfaces is equivalent; Magic Lens interaction results in better usability for large workspaces.
- Published
- 2015
65. WritLarge
- Author
-
Michel Pahud, Ken Hinckley, Bill Buxton, Xiao Tu, and Haijun Xia
- Subjects
Multimedia, Computer science, Human–computer interaction, Selection, Zoom, Gesture
- Abstract
WritLarge is a freeform canvas for early-stage design on electronic whiteboards with pen+touch input. The system aims to support a higher-level flow of interaction by 'chunking' the traditionally disjoint steps of selection and action into unified selection-action phrases. This holistic goal led us to address two complementary aspects: SELECTION, for which we devise a new technique known as the Zoom-Catcher that integrates pinch-to-zoom and selection in a single gesture for fluidly selecting and acting on content; plus ACTION, where we demonstrate how this addresses the combined issues of navigating, selecting, and manipulating content. In particular, the designer can transform selected ink strokes into flexible and easily-reversible representations via semantic, structural, and temporal axes of movement that are defined as conceptual 'moves' relative to the specified content. This approach dovetails zooming with lightweight specification of scope as well as the evocation of context-appropriate commands, at-hand, in a location-independent manner. This establishes powerful new primitives that can help to scaffold higher-level tasks, thereby unleashing the expressive power of ink in a compelling manner.
- Published
- 2017
66. Thumb + Pen Interaction on Tablets
- Author
-
Ken Hinckley, Michel Pahud, Ken Pfeuffer, and Bill Buxton
- Subjects
Computer science, Thumb, Human–computer interaction, Simulation
- Abstract
Modern tablets support simultaneous pen and touch input, but it remains unclear how to best leverage this capability for bimanual input when the nonpreferred hand holds the tablet. We explore Thumb + Pen interactions that support simultaneous pen and touch interaction, with both hands, in such situations. Our approach engages the thumb of the device-holding hand, such that the thumb interacts with the touch screen in an indirect manner, thereby complementing the direct input provided by the preferred hand. For instance, the thumb can determine how pen actions (articulated with the opposite hand) are interpreted. Alternatively, the pen can point at an object, while the thumb manipulates one or more of its parameters through indirect touch. Our techniques integrate concepts in a novel way that derive from marking menus, spring-loaded modes, indirect input, and multi-touch conventions. Our overall approach takes the form of a set of probes, each representing a meaningfully distinct class of application. They serve as an initial exploration of the design space at a level which will help determine the feasibility of supporting bimanual interaction in such contexts, and the viability of the Thumb + Pen techniques in so doing.
- Published
- 2017
67. Insights from Exploration of Engaging Technologies to Teach Reading and Writing: Story Baker
- Author
-
Michel Pahud
- Subjects
Multimedia, Reading, Robot, Art, Stylus, Visual arts
- Abstract
To engage children in learning to write, we spent several years exploring tools designed to engage children in creating and viewing stories. Our central focus was the automatic generation of animations. Tools included a digital stylus for writing and sketching, and in some cases simple robots and tangible, digitally-recognized objects. In pilot studies, children found the prototypes engaging. In 2007, a decision not to develop new hardware was made, but at today’s greatly reduced tablet cost and with more capable touch and pen technology, these experiments could inspire further research and development.
- Published
- 2017
68. GlassHands
- Author
-
Matthias Kranz, Dieter Schmalstieg, Eyal Ofek, Jens Grubert, and Michel Pahud
- Subjects
Engineering, Human–computer interaction, Computer graphics, Mobile device, Mobile interaction, Gesture
- Abstract
We present a novel approach for extending the input space around unmodified mobile devices. Using built-in front-facing cameras of unmodified handheld devices, GlassHands estimates hand poses and gestures through reflections in sunglasses, ski goggles or visors. Thereby, GlassHands creates an enlarged input space, rivaling input reach on large touch displays. We introduce the idea along with its technical concept and implementation. We demonstrate the feasibility and potential of our proposed approach in several application scenarios, such as map browsing or drawing using a set of interaction techniques previously possible only with modified mobile devices or on large touch displays. Our research is backed up with a user study.
- Published
- 2016
69. Wearables as Context for Guiard-abiding Bimanual Touch
- Author
-
Bill Buxton, Michel Pahud, Andrew M. Webb, and Ken Hinckley
- Subjects
Multimedia, Computer science, Wearable computer, Direct touch, Human–computer interaction, Wearable technology
- Abstract
We explore the contextual details afforded by wearable devices to support multi-user, direct-touch interaction on electronic whiteboards in a way that, unlike previous work, can be fully consistent with natural bimanual-asymmetric interaction as set forth by Guiard. Our work offers the following key observation. While Guiard's framework has been widely applied in HCI, for bimanual interfaces where each hand interacts via direct touch, subtle limitations of multi-touch technologies, as well as limitations in conception and design, mean that the resulting interfaces often cannot fully adhere to Guiard's principles even if they want to. The interactions are fundamentally ambiguous because the system does not know which hand, left or right, contributes each touch. But by integrating additional context from wearable devices, our system can identify which user is touching, as well as distinguish what hand they use to do so. This enables our prototypes to respect lateral preference, the assignment of natural roles to each hand advocated by Guiard, in a way that has not been articulated before.
- Published
- 2016
70. Pre-Touch Sensing for Mobile Interaction
- Author
-
Richard M. Banks, William A. S. Buxton, Gavin Smyth, Abigail Sellen, Christian Holz, Ken Hinckley, Seongkook Heo, Michel Pahud, Kenton O'Hara, and Hrvoje Benko
- Subjects
Modality (human–computer interaction), Computer science, Context menu, Touchscreen, Human–computer interaction, Mobile device, Mobile interaction, Gesture
- Abstract
Touchscreens continue to advance, including progress towards sensing fingers proximal to the display. We explore this emerging pre-touch modality via a self-capacitance touchscreen that can sense multiple fingers above a mobile device, as well as grip around the screen's edges. This capability opens up many possibilities for mobile interaction. For example, using pre-touch in an anticipatory role affords an "ad-lib interface" that fades in a different UI, appropriate to the context, as the user approaches one-handed with a thumb, two-handed with an index finger, or even with a pinch or two thumbs. Or we can interpret pre-touch in a retroactive manner that leverages the approach trajectory to discern whether the user made contact with a ballistic vs. a finely-targeted motion. Pre-touch also enables hybrid touch + hover gestures, such as selecting an icon with the thumb while bringing a second finger into range to invoke a context menu at a convenient location. Collectively these techniques illustrate how pre-touch sensing offers an intriguing new back-channel for mobile interaction.
- Published
- 2016
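The "ad-lib interface" in entry 70 above fades in a UI suited to how the hand approaches (one thumb, an index finger, a pinch, or two thumbs). Below is a toy dispatch sketch of that idea; the hover-count and grip inputs and the UI names are assumptions standing in for the paper's sensor data and designs.

```python
def adlib_ui(hovering_fingers: int, grip: str) -> str:
    """Pick a UI variant from pre-touch context, before any contact is made.

    `hovering_fingers` is the number of fingers sensed above the screen and
    `grip` is how the device is held ('one_handed' or 'two_handed'); both are
    hypothetical inputs standing in for the self-capacitance sensor data.
    """
    if grip == "one_handed" and hovering_fingers == 1:
        return "thumb-reachable controls along the gripping edge"
    if grip == "two_handed" and hovering_fingers == 1:
        return "full toolbar for precise index-finger targeting"
    if hovering_fingers >= 2:
        return "pinch/zoom affordances and two-thumb keyboard"
    return "plain content view (no hands approaching)"

print(adlib_ui(hovering_fingers=1, grip="one_handed"))
print(adlib_ui(hovering_fingers=2, grip="two_handed"))
```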
71. Sensing Tablet Grasp + Micro-mobility for Active Reading
- Author
-
François Guimbretière, Hrvoje Benko, Ken Hinckley, Dongwook Yoon, Pourang Irani, Michel Pahud, and Marcel Gavriliu
- Subjects
Human–computer interaction, Computer science, Orientation (computer vision), Bookmarking, Capacitive sensing, Grasp, Active reading
- Abstract
The orientation and repositioning of physical artefacts (such as paper documents) to afford shared viewing of content, or to steer the attention of others to specific details, is known as micro-mobility. But the role of grasp in micro-mobility has rarely been considered, much less sensed by devices. We therefore employ capacitive grip sensing and inertial motion to explore the design space of combined grasp + micro-mobility by considering three classes of technique in the context of active reading. Single user, single device techniques support grip-influenced behaviors such as bookmarking a page with a finger, but combine this with physical embodiment to allow flipping back to a previous location. Multiple user, single device techniques, such as passing a tablet to another user or working side-by-side on a single device, add fresh nuances of expression to co-located collaboration. And single user, multiple device techniques afford facile cross-referencing of content across devices. Founded on observations of grasp and micro-mobility, these techniques open up new possibilities for both individual and collaborative interaction with electronic documents.
- Published
- 2015
72. Sensing techniques for tablet+stylus interaction
- Author
-
François Guimbretière, Marcel Gavriliu, Pourang Irani, Andrew D. Wilson, Fabrice Matulic, Hrvoje Benko, Ken Hinckley, Xiang 'Anthony' Chen, William A. S. Buxton, and Michel Pahud
- Subjects
Orientation (computer vision), Human–computer interaction, Computer science, Computer graphics, Motion sensing, Stylus, Gesture
- Abstract
We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen + touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between his fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures produced by the nonpreferred hand, from touch gestures produced by the hand holding the pen, which necessarily impart a detectable motion signal to the stylus. We can sense which hand grips the tablet, and determine the screen's relative orientation to the pen. By selectively combining these signals and using them to complement one another, we can tailor interaction to the context, such as by ignoring unintentional touch inputs while writing, or supporting contextually-appropriate tools such as a magnifier for detailed stroke work that appears when the user pinches with the pen tucked between his fingers. These and other techniques can be used to impart new, previously unanticipated subtleties to pen + touch interaction on tablets.
- Published
- 2014
73. [D66] TouchMover 2.0
- Author
-
Mike Sinclair, Hrvoje Benko, and Michel Pahud
- Subjects
Computer science, Stereo display, Topographic map, Touchscreen, Computer graphics, Computer vision, Actuator, Haptic technology
- Abstract
Summary form only given. Our actuated immersive 3D display is capable of providing 1D movement and haptic screen force feedback in a single dimension perpendicular to the screen plane, and has an additional capability to render haptic texture cues via vibrotactile actuators attached to the touchscreen. We will demonstrate: 1) touching and feeling the 3D contour and 2D texture of a topographic map, 2) interacting with 3D objects by pushing on the screen with realistic force feedback, and 3) intuitively exploring and feeling pseudo tissue texture within volumetric data such as medical images (e.g., an MRI brain scan).
- Published
- 2014
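TouchMover 2.0 (entry 73 above) moves the screen along one axis and renders force when the user pushes on 3D content. Below is a simplified spring-law sketch of how penetration past a virtual surface could be turned into an opposing force; the stiffness value and the function interface are illustrative assumptions, not the system's actual control law.

```python
def feedback_force(finger_depth_mm: float, surface_depth_mm: float,
                   stiffness_n_per_mm: float = 0.5) -> float:
    """Return an opposing force (N) proportional to how far the user has
    pushed the screen past the virtual surface (a simple spring model;
    the stiffness constant is an arbitrary illustrative value)."""
    penetration = finger_depth_mm - surface_depth_mm
    return stiffness_n_per_mm * penetration if penetration > 0 else 0.0

# Pushing 6 mm past a surface located at 20 mm yields 3 N of resistance.
print(feedback_force(finger_depth_mm=26.0, surface_depth_mm=20.0))
```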
74. Toward compound navigation tasks on mobiles via spatial manipulation
- Author
-
Abigail Sellen, Ken Hinckley, Bill Buxton, Michel Pahud, and Shamsi T. Iqbal
- Subjects
3D interaction, Computer science, Panning, Lens (optics), Computer vision, Zoom, Mobile device, Gesture
- Abstract
We contrast the Chameleon Lens, which uses 3D movement of a mobile device held in the nonpreferred hand to support panning and zooming, with the Pinch-Flick-Drag metaphor of directly manipulating the view using multi-touch gestures. Lens-like approaches have significant potential because they can support navigation-selection, navigation-annotation, and other such compound tasks by off-loading navigation to the nonpreferred hand while the preferred hand annotates, marks a location, or draws a path on the screen. Our experimental results show that the Chameleon Lens is significantly slower than Pinch-Flick-Drag for the navigation subtask in isolation. But our studies also reveal that for navigation between a few known targets the lens performs significantly faster, that differences between the Chameleon Lens and Pinch-Flick-Drag rapidly diminish as users gain experience, and that in the context of a compound navigation-annotation task, the lens performs as well as Pinch-Flick-Drag despite its deficit for the navigation subtask itself.
- Published
- 2013
75. Informal information gathering techniques for active reading
- Author
-
Michel Pahud, Ken Hinckley, Bill Buxton, and Xiaojun Bi
- Subjects
Multimedia, Computer science, Human–computer interaction, Reading, Clipboard
- Abstract
GatherReader is a prototype e-reader with both pen and multi-touch input that illustrates several interesting design trade-offs to fluidly interleave content consumption behaviors (reading and flipping through pages) with information gathering and informal organization activities geared to active reading tasks. These choices include (1) relaxed precision for casual specification of scope; (2) multiple object collection via a visual clipboard; (3) flexible workflow via deferred action; and (4) complementary use of pen+touch. Our design affords active reading by limiting the transaction costs for secondary subtasks, while keeping users in the flow of the primary task of reading itself.
- Published
- 2012
76. VisTACO
- Author
-
Bill Buxton, Sheelagh Carpendale, Anthony Tang, and Michel Pahud
- Subjects
Information visualization, Computer science, Human–computer interaction, Interactive visualization
- Abstract
As we design tabletop technologies, it is important to also understand how they are being used. Many prior researchers have developed visualizations of interaction data from their studies to illustrate ideas and concepts. In this work, we develop an interactional model of tabletop collaboration, which informs the design of VisTACO, an interactive visualization tool for tabletop collaboration. Using VisTACO, we can explore the interactions of collaborators with the tabletop to identify patterns or unusual spatial behaviours, supporting the analysis process. VisTACO helps bridge the gap between observing the use of a tabletop system, and understanding users' interactions with the system.
- Published
- 2010
77. Pen + touch = new tools
- Author
-
Koji Yatani, Andrew D. Wilson, Ken Hinckley, Nicole Coddington, Bill Buxton, Hrvoje Benko, Jenny Rodenhouse, and Michel Pahud
- Subjects
Active pen, Computer science, Computer graphics, Mode switching, Workspace, Gesture
- Abstract
We describe techniques for direct pen+touch input. We observe people's manual behaviors with physical paper and notebooks. These serve as the foundation for a prototype Microsoft Surface application, centered on note-taking and scrapbooking of materials. Based on our explorations we advocate a division of labor between pen and touch: the pen writes, touch manipulates, and the combination of pen + touch yields new tools. This articulates how our system interprets unimodal pen, unimodal touch, and multimodal pen+touch inputs, respectively. For example, the user can hold a photo and drag off with the pen to create and place a copy; hold a photo and cross it in a freeform path with the pen to slice it in two; or hold selected photos and tap one with the pen to staple them all together. Touch thus unifies object selection with mode switching of the pen, while the muscular tension of holding touch serves as the "glue" that phrases together all the inputs into a unitary multimodal gesture. This helps the UI designer to avoid encumbrances such as physical buttons, persistent modes, or widgets that detract from the user's focus on the workspace.
- Published
- 2010
78. Manual deskterity
- Author
-
Koji Yatani, Hrvoje Benko, Bill Buxton, Nicole Coddington, Andrew D. Wilson, Michel Pahud, Ken Hinckley, and Jenny Rodenhouse
- Subjects
Computer science, Computer graphics, Gesture
- Abstract
Manual Deskterity is a prototype digital drafting table that supports both pen and touch input. We explore a division of labor between pen and touch that flows from natural human skill and differentiation of roles of the hands. We also explore the simultaneous use of pen and touch to support novel compound gestures.
- Published
- 2010
79. Three's company
- Author
-
Michel Pahud, Bill Buxton, Kori Inkpen, Hrvoje Benko, John C. Tang, and Anthony Tang
- Subjects
Human–computer interaction, Computer science, Workspace, Video-mediated communication, Distributed collaboration, Media space
- Abstract
We explore the design of a system for three-way collaboration over a shared visual workspace, specifically in how to support three channels of communication: person, reference, and task-space. In two studies, we explore the implications of extending designs intended for dyadic collaboration to three-person groups, and the role of each communication channel. Our studies illustrate the utility of multiple configurations of users around a distributed workspace, and explore the subtleties of traditional notions of identity, awareness, spatial metaphor, and corporeal embodiments as they relate to three-way collaboration.
- Published
- 2010
80. Performance Analysis of a Parallel Program for Wave Propagation Simulation
- Author
-
Thierry Cornu, Michel Pahud, and Frédéric Guidec
- Subjects
Wave propagation, Computer science, Computation, Workload, Computational science, Performance prediction, Distributed, parallel, and cluster computing
- Abstract
During the last decade, performance prediction has been repeatedly quoted as a key factor in developing parallel systems. Predicting the performance of a parallel program as a function of the number of processors and of the problem size is crucial for choosing the best hardware configuration and for tuning various parameters. This paper presents a method for performance analysis of parallel irregular applications. The model is closely related to the Bulk Synchronous Programming (BSP) model [4]. It is based on the measurement of basic communication and computation routines. The computational workload of each processor and the load imbalance are modeled analytically. The method is used for predicting the performance of ParFlow++, an irregular, parallel radio-wave propagation algorithm.
- Published
- 1997
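Entry 80 above predicts a parallel program's runtime as a function of processor count and problem size from measured computation and communication costs, in a BSP-like style. Below is a generic sketch of such a model; the cost terms, the assumed boundary-exchange volume, and the parameter values are placeholders, not the paper's calibrated measurements.

```python
import math

def predicted_runtime(p: int, n: int,
                      t_flop: float = 1e-8,    # seconds per unit of work
                      t_word: float = 1e-7,    # seconds per word communicated
                      t_sync: float = 1e-5,    # per-step synchronization cost
                      imbalance: float = 1.1,  # load-imbalance factor (>= 1)
                      supersteps: int = 10) -> float:
    """BSP-style estimate: per-processor work (scaled by a load-imbalance
    factor) plus boundary communication and synchronization, summed over a
    fixed number of supersteps. All constants are placeholders, not the
    calibrated measurements used in the paper."""
    work_per_step = imbalance * (n / p) * t_flop    # dominant computation term
    comm_per_step = math.sqrt(n / p) * t_word       # assumed O(sqrt(n/p)) boundary exchange
    return supersteps * (work_per_step + comm_per_step + t_sync)

for p in (1, 4, 16, 64):
    print(f"p={p:2d}  predicted {predicted_runtime(p, n=10_000_000):.3f} s")
```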
81. Contention in the Cray T3D communication network
- Author
-
Michel Pahud and Thierry Cornu
- Subjects
Interconnection, Network architecture, Computer science, Distributed computing, Locality, Telecommunications network, Computer network
- Abstract
This paper studies the effect of contention on communication times in the interconnection network of the Cray T3D computer. We propose a method to measure average communication time as a function of the utilization rate of the communication network; the results are presented. It is shown that locality of communications helps reduce contention effects and can therefore have a non-negligible impact on the performance of the system.
- Published
- 1996
82. 38.2: Direct Display Interaction via Simultaneous Pen + Multi-touch Input
- Author
-
Bill Buxton, Michel Pahud, and Ken Hinckley
- Subjects
Engineering, Modalities, Computer graphics, Multi-touch
- Abstract
Current developments hint at a rapidly approaching future where simultaneous pen + multi-touch input becomes the gold standard for direct interaction on displays. We are motivated by a desire to extend pen and multi-touch input modalities, including their use in concert, to enable users to take better advantage of each.
- Published
- 2010