36 results for "Jacob O. Wobbrock"
Search Results
2. Group Touch
- Author
-
Katie Davis, James Fogarty, Jacob O. Wobbrock, and Abigail Evans
- Subjects
Multimedia, Information interfaces and presentation (e.g., HCI), Group (mathematics), Computer science, Orientation (computer vision), Field data, Statistical model, Human–computer interaction, Instrumentation (computer programming)
- Abstract
We present Group Touch, a method for distinguishing among multiple users simultaneously interacting with a tabletop computer using only the touch information supplied by the device. Rather than tracking individual users for the duration of an activity, Group Touch distinguishes users from each other by modeling whether an interaction with the tabletop corresponds to either: (1) a new user, or (2) a change in users currently interacting with the tabletop. This reframing of the challenge as distinguishing users rather than tracking and identifying them allows Group Touch to support multi-user collaboration in real-world settings without custom instrumentation. Specifically, Group Touch examines pairs of touches and uses the difference in orientation, distance, and time between two touches to determine whether the same person performed both touches in the pair. Validated with field data from high-school students in a classroom setting, Group Touch distinguishes among users "in the wild" with a mean accuracy of 92.92% (SD=3.94%). Group Touch can imbue collaborative touch applications in real-world settings with the ability to distinguish among multiple users.
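The pairwise formulation lends itself to a compact illustration. Below is a minimal Python sketch of the pairwise features the abstract names (differences in orientation, distance, and time between two touches); the `Touch` record and the thresholded rule are assumptions for illustration, standing in for the trained statistical model described in the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Touch:
    x: float            # touch position in screen pixels
    y: float
    orientation: float  # finger/blob orientation in degrees, as reported by the tabletop
    timestamp: float    # seconds

def pair_features(a: Touch, b: Touch):
    """Differences in orientation, distance, and time between two touches."""
    d_orient = abs(a.orientation - b.orientation) % 180.0  # orientation is 180-degree periodic
    d_orient = min(d_orient, 180.0 - d_orient)
    d_dist = math.hypot(a.x - b.x, a.y - b.y)
    d_time = abs(a.timestamp - b.timestamp)
    return d_orient, d_dist, d_time

def same_person(a: Touch, b: Touch) -> bool:
    """Illustrative stand-in for the paper's statistical model: attribute
    nearby, similarly oriented, temporally close touches to one person.
    All threshold values here are invented, not taken from the paper."""
    d_orient, d_dist, d_time = pair_features(a, b)
    return d_orient < 30.0 and d_dist < 400.0 and d_time < 2.0
```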
- Published
- 2017
- Full Text
- View/download PDF
3. How Designing for People With and Without Disabilities Shapes Student Design Thinking
- Author
-
Cynthia L. Bennett, Kristen Shinohara, and Jacob O. Wobbrock
- Subjects
Multimedia, Design thinking, Assistive technology, Computers and society, Mainstream, Engineering ethics, Engineering design process, Psychology
- Abstract
Despite practices addressing disability in design and advocating user-centered design (UCD) approaches, popular mainstream technologies remain largely inaccessible for people with disabilities. We conducted a design course study investigating how student designers regard disability and explored how designing for both disabled and non-disabled users encouraged students to think about accessibility throughout the design process. Students focused on a design project while learning UCD concepts and techniques, working with people with and without disabilities throughout the project. We found that designing for both disabled and non-disabled users surfaced challenges and tensions in finding solutions to satisfy both groups, influencing students' attitudes toward accessible design. In addressing these tensions, non-functional aspects of accessible design emerged as important complements to functional aspects for users with and without disabilities.
- Published
- 2016
- Full Text
- View/download PDF
4. Session details: Session 2B: Interaction Techniques
- Author
-
Jacob O. Wobbrock
- Subjects
Multimedia, Computer science, Session (computer science)
- Published
- 2016
- Full Text
- View/download PDF
5. Gestures by Children and Adults on Touch Tables and Touch Walls in a Public Science Center
- Author
-
Lisa Anthony, Kathryn A. Stofer, Annie Luc, and Jacob O. Wobbrock
- Subjects
Multimedia, Information interfaces and presentation (e.g., HCI), Computer science, Public displays, Child–computer interaction, Touchscreen, Human–computer interaction, Table (database), Center (algebra and category theory), Gesture
- Abstract
Research on children's interactions with touchscreen devices has examined small and large screens and compared interaction to adults or among children of different ages. Little work has explicitly compared interaction on different platforms, however. Large touchscreen displays can be deployed flat, as in a table, or vertically, as on a wall. While these two form factors have been studied, it is not known what differences may exist between them. We present a study of visitors to a science museum, including children and their parents, who interacted with Google Earth on either a touch table or a touch wall. We compare the types of gestures and interactions attempted on each device and find several interesting results, including that users of all ages tend to make standard touchscreen gestures on both platforms, but children were more likely than adults to try new gestures. Users were also more likely to perform two-handed, multi-touch gestures on the touch wall than on the touch table. Our findings will inform the design of future interactive applications for each platform.
- Published
- 2016
- Full Text
- View/download PDF
6. Modeling Collaboration Patterns on an Interactive Tabletop in a Classroom Setting
- Author
-
Jacob O. Wobbrock, Abigail Evans, and Katie Davis
- Subjects
Multimedia, Computer science, Process (engineering), Collaborative learning, Field (computer science), Human–computer interaction, Table (database), Quality (business), Educational software
- Abstract
Interaction logs generated by educational software can provide valuable insights into the collaborative learning process and identify opportunities for technology to provide adaptive assistance. Modeling collaborative learning processes at tabletop computers is challenging, as the computer is only able to log a portion of the collaboration, namely the touch events on the table. Our previous lab study with adults showed that patterns in a group's touch interactions with a tabletop computer can reveal the quality of aspects of their collaborative process. We extend this understanding of the relationship between touch interactions and the collaborative process to adolescent learners in a field setting and demonstrate that the touch patterns reflect the quality of collaboration more broadly than previously thought, with accuracies up to 84.2%. We also present an approach to using the touch patterns to model the quality of collaboration in real-time.
- Published
- 2016
- Full Text
- View/download PDF
7. A robust design for accessible text entry
- Author
-
Jacob O. Wobbrock
- Subjects
Multimedia, Information interfaces and presentation (e.g., HCI), Computer science, Geography, Planning and Development, Graffiti, Variety (linguistics), Robust design, Mobile phone, Joystick, Computers and society, Enhanced Data Rates for GSM Evolution, Text entry, Stylus
- Abstract
This paper describes the author's dissertation research on designing, implementing, and evaluating the EdgeWrite text entry method. The goal of this research is to develop a method that is highly "robust," remaining accessible and accurate across a variety of devices, abilities, circumstances, and constraints. EdgeWrite is particularly aimed at users with motor impairments and able-bodied users "on the go." To date, this research has resulted in versions of EdgeWrite for PDAs, touchpads, displacement joysticks, isometric joysticks, trackballs, 4-keys, and more, all of which use the same EdgeWrite alphabet and concepts. The stylus version, for instance, has been shown to be significantly more accurate than Graffiti for both able-bodied and motor-impaired users. Similarly, the trackball version has been shown to be better than on-screen keyboards for some people who use trackballs due to motor impairments. This paper discusses these and other achievements, and points towards future work on a mobile phone version for situationally-impaired users. From its inception, EdgeWrite has been developed with the help of participants, both able-bodied and motor-impaired.
- Published
- 2006
- Full Text
- View/download PDF
8. Improving Pointing in Graphical User Interfaces for People with Motor Impairments Through Ability-Based Design
- Author
-
Jacob O. Wobbrock
- Subjects
Multimedia, Information interfaces and presentation (e.g., HCI), Computer science, Human–computer interaction, User interface, Graphical user interface
- Abstract
Pointing to targets in graphical user interfaces remains a frequent and fundamental necessity in modern computing systems. Yet for millions of people with motor impairments, children, and older users, pointing—whether with a mouse cursor, a stylus, or a finger on a touch screen—remains a major access barrier because of the fine-motor skills required. In a series of projects inspired by and contributing to ability-based design, we have reconsidered the nature and assumptions behind pointing, resulting in changes to how mouse cursors work, the types of targets used, the way interfaces are designed and laid out, and even how input devices are used. The results from these explorations show that people with motor difficulties can acquire targets in graphical user interfaces when interfaces are designed to better match the abilities of their users. Ability-based design, as both a design philosophy and a design approach, provides a route to realizing a future in which people can utilize whatever abilities they have to express themselves not only to machines, but to the world.
- Published
- 2014
- Full Text
- View/download PDF
9. Analyzing the intelligibility of real-time mobile sign language video transmitted below recommended standards
- Author
-
Jacob O. Wobbrock, Richard E. Ladner, Ben Flowers, Eve A. Riskin, and Jessica J. Tran
- Subjects
Standardization, Multimedia, American Sign Language, Computer science, Speech recognition, Intelligibility (communication), Sign language, Frame rate, Bit rate, Data compression, Fingerspelling
- Abstract
Mobile sign language video communication has the potential to be more accessible and affordable if the current recommended video transmission standard of 25 frames per second at 100 kilobits per second (kbps), as prescribed in the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Q.26/16, were relaxed. To investigate sign language video intelligibility at lower settings, we conducted a laboratory study in which fluent ASL signers, in pairs, held real-time free-form conversations over an experimental smartphone app transmitting real-time video at 5 fps/25 kbps, 10 fps/50 kbps, 15 fps/75 kbps, and 30 fps/150 kbps, settings well below the ITU-T standard that save both bandwidth and battery life. The aim of the laboratory study was to investigate how fluent ASL signers adapt to the lower video transmission rates, and to identify a lower threshold at which intelligible real-time conversations could be held. We gathered both subjective and objective measures from participants and calculated battery life drain. As expected, reducing the frame rate/bit rate monotonically extended the battery life. We discovered all participants were successful in holding intelligible conversations across all frame rates/bit rates. Participants did perceive the lower quality of video transmitted at 5 fps/25 kbps and felt that they were signing more slowly to compensate; however, participants' rate of fingerspelling did not actually decrease. These and other findings support our recommendation that intelligible mobile sign language conversations can occur at frame rates as low as 10 fps/50 kbps while optimizing resource consumption, video intelligibility, and user preferences.
- Published
- 2014
- Full Text
- View/download PDF
10. Touchplates
- Author
-
Jacob O. Wobbrock, Meredith Ringel Morris, and Shaun K. Kane
- Subjects
Multimedia, Blindness, Information interfaces and presentation (e.g., HCI), Computer science, Visually impaired, Overlay, 3D printer, Personalization, Formative assessment, Software
- Abstract
Adding tactile feedback to touch screens can improve their accessibility for blind users, but prior approaches to integrating tactile feedback with touch screens have either offered limited functionality or required extensive (and typically expensive) customization of the hardware. We introduce touchplates: carefully designed physical guides, overlaid on the screen and recognized by the underlying application, that provide tactile feedback for touch screens. Unlike prior approaches to integrating tactile feedback with touch screens, touchplates are implemented with simple plastics and standard touch screen software, making them versatile and inexpensive. Touchplates may be customized to suit individual users and applications, and may be produced on a laser cutter or 3D printer, or made by hand. We describe the design and implementation of touchplates, a "starter kit" of touchplates, and feedback from a formative evaluation with 9 people with visual impairments. Touchplates provide a low-cost, adaptable, and accessible method of adding tactile feedback to touch screen interfaces.
- Published
- 2013
- Full Text
- View/download PDF
11. A web-based intelligibility evaluation of sign language video transmitted at low frame rates and bitrates
- Author
-
Jessica J. Tran, Jacob O. Wobbrock, Eve A. Riskin, and Rafael Rodriguez
- Subjects
Multimedia, American Sign Language, Computer science, Speech recognition, Intelligibility (communication), Sign language, Video quality, Frame rate, Network congestion, Models of communication, Data compression
- Abstract
Mobile sign language video conversations can become unintelligible due to high video transmission rates causing network congestion and delayed video. In an effort to understand how much sign language video quality can be sacrificed, we evaluated the perceived lower limits of intelligible sign language video transmitted at four low frame rates (1, 5, 10, and 15 frames per second [fps]) and four low fixed bitrates (15, 30, 60, and 120 kilobits per second [kbps]). We discovered an "intelligibility ceiling effect," where increasing the frame rate above 10 fps decreased perceived intelligibility, and increasing the bitrate above 60 kbps produced diminishing returns. Additional findings suggest that relaxing the recommended international video transmission rate, 25 fps at 100 kbps or higher, would still provide intelligible content while considering network resources and bandwidth consumption. As part of this work, we developed the Human Signal Intelligibility Model, a new conceptual model useful for informing evaluations of video intelligibility.
- Published
- 2013
- Full Text
- View/download PDF
12. Access lens
- Author
-
Shaun K. Kane, Brian Frey, and Jacob O. Wobbrock
- Subjects
Screen reader, Mode (computer interface), Iterative design, Multimedia, Human–computer interaction, Gesture recognition, Computer science, Augmented reality, User interface, Gesture
- Abstract
Gesture-based touch screen user interfaces, when designed to be accessible to blind users, can be an effective mode of interaction for those users. However, current accessible touch screen interaction techniques suffer from one serious limitation: they are only usable on devices that have been explicitly designed to support them. Access Lens is a new interaction method that uses computer vision-based gesture tracking to enable blind people to use accessible gestures on paper documents and other physical objects, such as product packages, device screens, and home appliances. This paper describes the development of Access Lens hardware and software, the iterative design of Access Lens in collaboration with blind computer users, and opportunities for future development.
- Published
- 2013
- Full Text
- View/download PDF
13. Age-related differences in performance with touchscreens compared to traditional mouse input
- Author
-
Kays Fattal, Tanya Dastyar, Leah Findlater, Jon E. Froehlich, and Jacob O. Wobbrock
- Subjects
Psychomotor learning, Multimedia, Performance gap, Audiology, Touchscreen, Younger adults, Age related
- Abstract
Despite the apparent popularity of touchscreens among older adults, little is known about older adults' psychomotor performance with these devices. We compared the performance of older and younger adults on four tasks performed with both a desktop computer and mouse and a touchscreen: pointing, dragging, crossing, and steering. On the touchscreen, we also examined pinch-to-zoom. Our results show that while older adults were significantly slower than younger adults in general, the touchscreen reduced this performance gap relative to the desktop and mouse. Indeed, the touchscreen resulted in a significant movement time reduction of 35% over the mouse for older adults, compared to only 16% for younger adults. Error rates also decreased.
- Published
- 2013
- Full Text
- View/download PDF
14. ContextType
- Author
-
Travis Mandel, Shwetak N. Patel, Alex Jansen, Mayank Goel, and Jacob O. Wobbrock
- Subjects
Multimedia, Computer science, Control (management), Word error rate, Index finger, Human–computer interaction, Text entry, Typing, Language model, Mobile device, Virtual keyboard
- Abstract
The challenge of mobile text entry is exacerbated because mobile devices are used in many situations and with a variety of hand postures. We introduce ContextType, an adaptive text entry system that leverages information about a user's hand posture (using two thumbs, the left thumb, the right thumb, or the index finger) to improve mobile touch screen text entry. ContextType switches between keyboard models based on hand posture inference while typing. ContextType combines the user's posture-specific touch pattern information with a language model to classify the user's touch events as pressed keys. To create our models, we collected usage patterns from 16 participants in each of the four postures. In a subsequent study with the same 16 participants comparing ContextType to a control condition, ContextType reduced total text entry error rate by 20.6%.
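The combination the abstract describes, a posture-specific touch model plus a language model, can be sketched as a naive Bayes decision over keys. In the Python sketch below, the `touch_models` and `next_key_probs` structures are assumptions for illustration (frozen SciPy Gaussians over touch locations and language-model key probabilities), not ContextType's actual implementation.

```python
import math
from scipy.stats import multivariate_normal

def classify_touch(touch_xy, posture, touch_models, next_key_probs):
    """Pick the most probable key for one touch by combining a
    posture-specific touch model with a language model.
    touch_models[posture][key] -> frozen 2D Gaussian over touch points;
    next_key_probs[key] -> P(next key | text typed so far)."""
    best_key, best_score = None, -math.inf
    for key, touch_dist in touch_models[posture].items():
        # log P(touch | key, posture) + log P(key | history)
        score = touch_dist.logpdf(touch_xy) + math.log(next_key_probs.get(key, 1e-9))
        if score > best_score:
            best_key, best_score = key, score
    return best_key

# Hypothetical 'right_thumb' posture model for two neighboring keys.
models = {"right_thumb": {
    "a": multivariate_normal(mean=[40, 300], cov=[[60, 0], [0, 80]]),
    "s": multivariate_normal(mean=[80, 300], cov=[[60, 0], [0, 80]]),
}}
print(classify_touch([55, 305], "right_thumb", models, {"a": 0.3, "s": 0.7}))
```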
- Published
- 2013
- Full Text
- View/download PDF
15. Design goals for a system for enhancing AAC with personalized video
- Author
-
Katie O'Leary, Richard E. Ladner, Jacob O. Wobbrock, Jiaqi Heng, Ivan Darmansya, Eve A. Riskin, Patricia Dowden, and Charles B. Delahunt
- Subjects
Comprehension, Augmentative and alternative communication, Point (typography), Multimedia, Action (philosophy), Computer science, System concept, Personalization
- Abstract
Enabling end-users of Augmentative and Alternative Communication (AAC) systems to add personalized video content at runtime holds promise for improving communication, but the requirements for such systems are as yet unclear. To explore this issue, we present Vid2Speech, a prototype AAC system for children with complex communication needs (CCN) that uses personalized video to enhance representations of action words. We describe three design goals that guided the integration of personalized video to enhance AAC in our early-stage prototype: 1) providing social-temporal navigation; 2) enhancing comprehension; and 3) enabling customization in real time. Our system concept represents one approach to realizing these goals; however, we contribute the goals and the system as a starting point for future innovations in personalized video-based AAC.
- Published
- 2012
- Full Text
- View/download PDF
16. Personalized input
- Author
-
Leah Findlater and Jacob O. Wobbrock
- Subjects
Visual adaptation, Touchscreen, Multimedia, Information interfaces and presentation (e.g., HCI), Computer science, Human–computer interaction, Typing, Adaptation (computer science), Keyboard layout
- Abstract
Although typing on touchscreens is slower than typing on physical keyboards, touchscreens offer a critical potential advantage: they are software-based, and, as such, the keyboard layout and classification models used to interpret key presses can dynamically adapt to suit each user's typing pattern. To explore this potential, we introduce and evaluate two novel personalized keyboard interfaces, both of which adapt their underlying key-press classification models. The first keyboard also visually adapts the location of keys while the second one always maintains a visually stable rectangular layout. A three-session user evaluation showed that the keyboard with the stable rectangular layout significantly improved typing speed compared to a control condition with no personalization. Although no similar benefit was found for the keyboard that also offered visual adaptation, overall subjective response to both new touchscreen keyboards was positive. As personalized keyboards are still an emerging area of research, we also outline a design space that includes dimensions of adaptation and key-press classification features.
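One plausible reading of "adapting the underlying key-press classification models" is an online, per-user update of where each key's presses actually land. The Python sketch below is an assumption-laden illustration of that idea (an exponential-moving-average update with an invented rate), not the paper's actual classifier, which a deployed keyboard would extend with per-key covariance and a language model.

```python
import numpy as np

class AdaptiveKey:
    """Per-user model of one key's expected hit point, adapted online."""
    def __init__(self, center, alpha=0.1):
        self.center = np.asarray(center, dtype=float)  # expected hit point (x, y)
        self.alpha = alpha                             # adaptation rate (illustrative)

    def update(self, touch_xy):
        # Drift the key's expected hit point toward this user's observed touches.
        self.center = (1.0 - self.alpha) * self.center + \
                      self.alpha * np.asarray(touch_xy, dtype=float)

def classify(touch_xy, keys):
    """Assign a touch to the nearest adapted key center.
    `keys` maps a character to its AdaptiveKey instance."""
    p = np.asarray(touch_xy, dtype=float)
    return min(keys, key=lambda k: np.linalg.norm(keys[k].center - p))
```

Note that this kind of model-level adaptation leaves the rendered layout untouched, which is the distinction the abstract draws between the visually adaptive keyboard and the visually stable one.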
- Published
- 2012
- Full Text
- View/download PDF
17. Evaluating quality and comprehension of real-time sign language video on mobile phones
- Author
-
Jaehong Chon, Jacob O. Wobbrock, Richard E. Ladner, Jessica J. Tran, Joy Kim, and Eve A. Riskin
- Subjects
American Sign Language, Multimedia, Computer science, Speech recognition, Image processing and computer vision, Sign language, Video quality, Comprehension, Mobile phone, PEVQ, Subjective video quality, Data compression
- Abstract
Video and image quality are often objectively measured using peak signal-to-noise ratio (PSNR), but for sign language video, human comprehension is most important. Yet the relationship of human comprehension to PSNR has not been studied. In this survey, we determine how well PSNR matches human comprehension of sign language video. We use very low bitrates (10-60 kbps) and two low spatial resolutions (192×144 and 320×240 pixels) which may be typical of video transmission on mobile phones using 3G networks. In a national online video-based user survey of 103 respondents, we found that respondents preferred the 320×240 spatial resolution transmitted at 20 kbps and higher; this does not match what PSNR results would predict. However, when comparing perceived ease/difficulty of comprehension, we found that responses did correlate well with measured PSNR. This suggests that PSNR may not be suitable for representing subjective video quality, but can be reliable as a measure for comprehensibility of American Sign Language (ASL) video. These findings are applied to our experimental mobile phone application, MobileASL, which enables real-time sign language communication for Deaf users at low bandwidths over the U.S. 3G cellular network.
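For reference, PSNR is a simple function of the mean squared error between reference and distorted frames. A minimal sketch of the standard computation, assuming same-shape 8-bit frames:

```python
import numpy as np

def psnr(reference, distorted, max_value=255.0):
    """Peak signal-to-noise ratio in decibels between two frames:
    PSNR = 10 * log10(MAX^2 / MSE). Standard definition; per-frame
    values are typically averaged over a video."""
    ref = np.asarray(reference, dtype=np.float64)
    dist = np.asarray(distorted, dtype=np.float64)
    mse = np.mean((ref - dist) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10((max_value ** 2) / mse)
```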
- Published
- 2011
- Full Text
- View/download PDF
18. Access overlays
- Author
-
Richard E. Ladner, Meredith Ringel Morris, Shaun K. Kane, Jacob O. Wobbrock, Daniel Wigdor, and Annuska Perkins
- Subjects
Screen reader, Blindness, Multimedia, Computer science, Overlay, Interactive kiosk, Software, Enhanced Data Rates for GSM Evolution, Set (psychology), Projection (set theory)
- Abstract
Many touch screens remain inaccessible to blind users, and those approaches to providing access that do exist offer minimal support for interacting with large touch screens or spatial data. In this paper, we introduce a set of three software-based access overlays intended to improve the accessibility of large touch screen interfaces, specifically interactive tabletops. Our access overlays are called edge projection, neighborhood browsing, and touch-and-speak. In a user study, 14 blind users compared access overlays to an implementation of Apple's VoiceOver screen reader. Our results show that two of our techniques were faster than VoiceOver, that participants correctly answered more questions about the screen's layout using our techniques, and that participants overwhelmingly preferred our techniques. We developed several applications demonstrating the use of access overlays, including an accessible map kiosk and an accessible board game.
- Published
- 2011
- Full Text
- View/download PDF
19. Session details: Pointing
- Author
-
Jacob O. Wobbrock
- Subjects
Multimedia, Computer science, Session (computer science)
- Published
- 2011
- Full Text
- View/download PDF
20. Typing on flat glass
- Author
-
Leah Findlater, Daniel Wigdor, and Jacob O. Wobbrock
- Subjects
Multimedia, Information interfaces and presentation (e.g., HCI), Human–computer interaction, Computer science, Natural (music), Typing, Flat glass
- Abstract
Touch screen surfaces large enough for ten-finger input have become increasingly popular, yet typing on touch screens pales in comparison to physical keyboards. We examine typing patterns that emerge when expert users of physical keyboards touch-type on a flat surface. Our aim is to inform future designs of touch screen keyboards, with the ultimate goal of supporting touch-typing with limited tactile feedback. To study the issues inherent to flat-glass typing, we asked 20 expert typists to enter text under three conditions: (1) with no visual keyboard and no feedback on input errors, then (2) with and (3) without a visual keyboard, but with some feedback. We analyzed touch contact points and hand contours, looking at attributes such as natural finger positioning, the spread of hits among individual keys, and the pattern of non-finger touches. We also show that expert typists exhibit spatially consistent key press distributions within an individual, which provides evidence that eyes-free touch-typing may be possible on touch surfaces and points to the role of personalization in such a solution. We conclude with implications for design.
- Published
- 2011
- Full Text
- View/download PDF
21. Usable gestures for blind people
- Author
-
Shaun K. Kane, Richard E. Ladner, and Jacob O. Wobbrock
- Subjects
User studies, Multimedia, Information interfaces and presentation (e.g., HCI), Gesture recognition, Computer science, Still face, Set (psychology), Usable, Preference, Gesture
- Abstract
Despite growing awareness of the accessibility issues surrounding touch screen use by blind people, designers still face challenges when creating accessible touch screen interfaces. One major stumbling block is a lack of understanding about how blind people actually use touch screens. We conducted two user studies that compared how blind people and sighted people use touch screen gestures. First, we conducted a gesture elicitation study in which 10 blind and 10 sighted people invented gestures to perform common computing tasks on a tablet PC. We found that blind people have different gesture preferences than sighted people, including preferences for edge-based gestures and gestures that involve tapping virtual keys on a keyboard. Second, we conducted a performance study in which the same participants performed a set of reference gestures. We found significant differences in the speed, size, and shape of gestures performed by blind people versus those performed by sighted people. Our results suggest new design guidelines for accessible touch screen interfaces.
- Published
- 2011
- Full Text
- View/download PDF
22. From the lab to the world
- Author
-
Leah Findlater, Jacob O. Wobbrock, and Alex Jansen
- Subjects
Multimedia, Information interfaces and presentation (e.g., HCI), Computer science, Visual space, Cursor (user interface), Computer graphics
- Abstract
We present the Pointing Magnifier as a case study for understanding the issues and challenges of deploying lab-validated pointing facilitation techniques into the real world. The Pointing Magnifier works by magnifying the contents of an area cursor to allow for selection in a magnified visual and motor space. The technique has been shown in prior lab studies to be effective at reducing the need for fine pointing for motor-impaired users. We highlight key design and technical challenges in bringing the technique, and such techniques in general, from the lab to the field.
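The core coordinate mapping behind selection in a magnified visual and motor space is small. A sketch, assuming the lens magnifies the area-cursor contents by a factor about the cursor's center; the function and its names are illustrative, not the tool's actual code:

```python
def lens_to_screen(click_xy, lens_center_xy, magnification):
    """Map a click made inside the magnified lens back to original
    (unmagnified) screen coordinates, so a coarse click in the lens
    selects a fine on-screen point. Illustrative sketch only."""
    lx, ly = click_xy
    cx, cy = lens_center_xy
    return (cx + (lx - cx) / magnification,
            cy + (ly - cy) / magnification)

# A 4x lens centered at (500, 400): a click 80 px right of center in
# the lens corresponds to a point only 20 px right of center on screen.
print(lens_to_screen((580, 400), (500, 400), 4.0))  # -> (520.0, 400.0)
```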
- Published
- 2011
- Full Text
- View/download PDF
23. A web-based user survey for evaluating power saving strategies for deaf users of mobileASL
- Author
-
Rafael Rodriguez, Jessica J. Tran, Richard E. Ladner, Jacob O. Wobbrock, Joy Kim, Eve A. Riskin, Sheri Yin, and Tressa W. Johnson
- Subjects
Battery (electricity), Deaf culture, American Sign Language, Multimedia, Computer science, Video quality, Variable frame rate, Web application, Duration (project management), Data compression
- Abstract
MobileASL is a video compression project for two-way, real-time video communication on cell phones, allowing Deaf people to communicate in the language most accessible to them, American Sign Language. Unfortunately, running MobileASL quickly depletes a full battery charge in a few hours. Previous work on MobileASL investigated a method called variable frame rate (VFR) to increase the battery duration. We expand on this previous work by creating two new power saving algorithms: variable spatial resolution (VSR), and the application of both VFR and VSR together. These algorithms extend the battery life by altering the temporal and/or spatial resolutions of video transmitted on MobileASL. We found that implementing only VFR extended the battery life from 284 minutes to 307 minutes; implementing only VSR extended the battery life to 306 minutes, and implementing both VFR and VSR extended the battery life to 315 minutes. We evaluated all three algorithms by creating a linguistically accessible online survey to investigate Deaf people's perceptions of video quality when these algorithms were applied. In our survey results, we found that VFR produces perceived video choppiness and VSR produces perceived video blurriness; however, a surprising finding was that when both VFR and VSR are used together, they largely ameliorate the choppiness and blurriness perceived, i.e., they each improve the use of the other. This is a useful finding because using VFR and VSR together saves the most battery life.
- Published
- 2010
- Full Text
- View/download PDF
24. Augmenting on-screen instructions with micro-projected guides
- Author
-
Jacob O. Wobbrock, Shaun K. Kane, Daniel Avrahami, and Stephanie Rosenthal
- Subjects
Task (computing), Multimedia, Computer science, Everyday tasks, Computer-Assisted Instruction, Augmented reality, Set (psychology)
- Abstract
We present a study that evaluates the effectiveness of augmenting on-screen instructions with micro-projection for manual task guidance, unlike prior work, which replaced on-screen instructions with alternative modalities (e.g., head-mounted displays). In our study, 30 participants completed 10 trials each of 11 manual tasks chosen to represent a set of common task components (e.g., cutting, folding) found in many everyday activities such as crafts, cooking, and hobby electronics. Fifteen participants received only on-screen instructions, and 15 received both on-screen and micro-projected instructions. In contrast to prior work, which focused only on whole tasks, our study examines the benefit of augmenting common task instructions. The augmented instructions improved participants' performance overall; however, we show that in certain cases where projected guides and physical objects visually interfered, projected elements caused increased errors. Our results demonstrate that examining effectiveness at an instruction level is both useful and necessary, and provide insight into the design of systems that help users perform everyday tasks.
- Published
- 2010
- Full Text
- View/download PDF
25. Freedom to roam
- Author
-
Chandrika Jayant, Richard E. Ladner, Shaun K. Kane, and Jacob O. Wobbrock
- Subjects
Formative assessment, Low vision, Blindness, Multimedia, Everyday tasks, Motor impairment, Psychology, Mobile device, Variety (cybernetics)
- Abstract
Mobile devices provide people with disabilities new opportunities to act independently in the world. However, these empowering devices have their own accessibility challenges. We present a formative study that examines how people with visual and motor disabilities select, adapt, and use mobile devices in their daily lives. We interviewed 20 participants with visual and motor disabilities and asked about their current use of mobile devices, including how they select them, how they use them while away from home, and how they adapt to accessibility challenges when on the go. Following the interviews, 19 participants completed a diary study in which they recorded their experiences using mobile devices for one week. Our results show that people with visual and motor disabilities use a variety of strategies to adapt inaccessible mobile devices and successfully use them to perform everyday tasks and navigate independently. We provide guidelines for more accessible and empowering mobile device design.
- Published
- 2009
- Full Text
- View/download PDF
26. Activity analysis enabling real-time video communication on mobile phones for deaf users
- Author
-
Jaehong Chon, Jacob O. Wobbrock, Eve A. Riskin, Neva Cherniavsky, and Richard E. Ladner
- Subjects
Videotelephony, Multimedia, Phone, Mobile phone, Computer science, Variable frame rate, Intelligibility (communication), Sign language, Frame rate, Data compression
- Abstract
We describe our system, MobileASL, for real-time video communication on the current U.S. mobile phone network. The goal of MobileASL is to enable Deaf people to communicate in sign language over mobile phones by compressing and transmitting sign language video in real time on an off-the-shelf mobile phone, which has a weak processor, uses limited bandwidth, and has little battery capacity. We develop several H.264-compliant algorithms to save system resources while maintaining ASL intelligibility by focusing on the important segments of the video. We employ a dynamic skin-based region-of-interest (ROI) that encodes the skin at higher quality at the expense of the rest of the video. We also automatically recognize periods of signing versus not signing and raise and lower the frame rate accordingly, a technique we call variable frame rate (VFR). We show that our variable frame rate technique results in a 47% gain in battery life on the phone, corresponding to an extra 68 minutes of talk time. We also evaluate our system in a user study. Participants fluent in ASL engaged in unconstrained conversations over mobile phones in a laboratory setting. We find that the ROI increases intelligibility and decreases guessing. VFR increases the need for signs to be repeated and the number of conversational breakdowns, but does not affect the users' perception of adopting the technology. These results show that our sign language-sensitive algorithms can save considerable resources without sacrificing intelligibility.
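The VFR idea, lowering the frame rate when no one is signing, can be illustrated in a few lines. The Python sketch below uses crude frame differencing as the activity signal; MobileASL's signing recognizer is more sophisticated, and the threshold and rates here are invented for illustration.

```python
import numpy as np

def activity_score(prev_frame, frame):
    """Crude signing-activity estimate: mean absolute luminance change
    between consecutive grayscale frames, normalized to 0..1."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(np.mean(diff)) / 255.0

def choose_frame_rate(score, threshold=0.02, signing_fps=10, idle_fps=1):
    """Variable frame rate (VFR): encode at full rate while the user
    appears to be signing, and drop the rate during pauses to save
    processor time, bandwidth, and battery."""
    return signing_fps if score >= threshold else idle_fps
```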
- Published
- 2009
- Full Text
- View/download PDF
27. Exploring the design of accessible goal crossing desktop widgets
- Author
-
Jacob O. Wobbrock, Morgan Dixon, Kristen Shinohara, Eun Kyoung Choe, and Parmit K. Chilana
- Subjects
Multimedia, Information interfaces and presentation (e.g., HCI), Computer science, Pointer (user interface), Design elements and principles, Input device, Usability, Goal crossing, Interaction technique, Human–computer interaction, Gesture
- Abstract
Prior work has shown that goal crossing may be a more accessible interaction technique than conventional pointing-and-clicking for motor-impaired users. Although goal crossing with pen-based input devices has been studied, pen-based designs have limited applicability on the desktop because the pen can "fly in," cross, and "fly out," whereas a persistent mouse cursor cannot. We therefore explore possible designs for accessible mouse-based goal crossing widgets that avoid triggering unwanted goals by using secondary goals, gestures, and corners and edges. We identify four design principles for accessible desktop goal crossing widgets: ease of use for motor-impaired users, safety from false selections, efficiency, and scalability.
- Published
- 2009
- Full Text
- View/download PDF
28. Slide rule
- Author
-
Jacob O. Wobbrock, Shaun K. Kane, and Jeffrey P. Bigham
- Subjects
Screen reader, Slide rule, Blindness, Multimedia, Computer science, Speech output, Multi-touch, Human–computer interaction, Set (psychology), Mobile device
- Abstract
Recent advances in touch screen technology have increased the prevalence of touch screens and have prompted a wave of new touch screen-based devices. However, touch screens are still largely inaccessible to blind users, who must adopt error-prone compensatory strategies to use them or find accessible alternatives. This inaccessibility is due to interaction techniques that require the user to visually locate objects on the screen. To address this problem, we introduce Slide Rule, a set of audio-based multi-touch interaction techniques that enable blind users to access touch screen applications. We describe the design of Slide Rule, our interaction techniques, and a user study in which 10 blind people used Slide Rule and a button-based Pocket PC screen reader. Results show that Slide Rule was significantly faster than the button-based system, and was preferred by 7 of 10 users. However, users made more errors when using Slide Rule than when using the more familiar button-based system.
- Published
- 2008
- Full Text
- View/download PDF
29. Voicedraw
- Author
-
Jacob O. Wobbrock, James A. Landay, and Susumu Harada
- Subjects
Painting, Modality (human–computer interaction), Hands free, Multimedia, Computer science, Human–computer interaction, Design process, User interface, Human voice, Computer art
- Abstract
We present VoiceDraw, a voice-driven drawing application for people with motor impairments that provides a way to generate free-form drawings without needing manual interaction. VoiceDraw was designed and built to investigate the potential of the human voice as a modality to bring fluid, continuous direct manipulation interaction to users who lack the use of their hands. VoiceDraw also allows us to study the issues surrounding the design of a user interface optimized for non-speech voice-based interaction. We describe the features of the VoiceDraw application, our design process, including our user-centered design sessions with a 'voice painter', and offer lessons learned that could inform future voice-based design efforts. In particular, we offer insights for mapping human voice to continuous control.
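As one concrete illustration of mapping non-speech voice to continuous control, consider a sketch in which a recognized vowel selects a movement direction and loudness sets the speed, in the spirit of vowel-based direction control; the specific mapping and parameters below are assumptions for illustration, not the paper's exact scheme.

```python
import math

def voice_to_velocity(direction_angle, loudness, max_speed=300.0):
    """Map non-speech voice features to a 2D brush velocity:
    `direction_angle` (radians) would come from a vowel classifier,
    and `loudness` (0..1) sets the speed. Illustrative mapping only."""
    speed = max(0.0, min(1.0, loudness)) * max_speed  # px/s
    return (speed * math.cos(direction_angle),
            speed * math.sin(direction_angle))
```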
- Published
- 2007
- Full Text
- View/download PDF
30. Barrier pointing
- Author
-
Jacob O. Wobbrock, Shaun K. Kane, and Jon E. Froehlich
- Subjects
Multimedia, Human–computer interaction, Computer science, Physical stability, Enhanced Data Rates for GSM Evolution, Stylus, Mobile device, Target acquisition, Motion (physics)
- Abstract
Mobile phones and personal digital assistants (PDAs) are incredibly popular pervasive technologies. Many of these devices contain touch screens, which can present problems for users with motor impairments due to small targets and their reliance on tapping for target acquisition. In order to select a target, users must tap on the screen, an action which requires the precise motion of flying into a target and lifting without slipping. In this paper, we propose a new technique for target acquisition called barrier pointing, which leverages the elevated physical edges surrounding the screen to improve pointing accuracy. After designing a series of barrier pointing techniques, we conducted an initial study with 9 able-bodied users and 9 users with motor impairments in order to discover the parameters that make barrier pointing successful. From this data, we offer an in-depth analysis of the performance of two motor-impaired users for whom barrier pointing was especially beneficial. We show the importance of providing physical stability by allowing the stylus to press against the screen and its physical edge. We offer other design insights and lessons learned that can inform future attempts at leveraging the physical properties of mobile devices to improve accessibility.
- Published
- 2007
- Full Text
- View/download PDF
31. WebinSitu
- Author
-
Jeremy T. Brudvik, Jacob O. Wobbrock, Anna C. Cavender, Jeffrey P. Bigham, and Richard E. Ladner
- Subjects
Web standards, Web 2.0, Multimedia, Web development, Computer science, World Wide Web, Web Accessibility Initiative, Web design, Web page, Web navigation, Web accessibility
- Abstract
Web browsing is inefficient for blind web users because of persistent accessibility problems, but the extent of these problems and their practical effects from the perspective of the user have not been sufficiently examined. We conducted a study in situ to investigate the accessibility of the web as experienced by web users. This remote study used an advanced web proxy that leverages AJAX technology to record both the pages viewed and the actions taken by users on the web pages that they visited. Our study was conducted remotely over the period of one week, and our participants used the assistive technology and software to which they were already accustomed and had already configured according to preference. These advantages allowed us to aggregate observations of many users and to explore the practical effects of accessibility problems on our blind participants and the coping strategies they employed. Our study reflects web accessibility from the perspective of web users and describes quantitative differences in the browsing behavior of blind and sighted web users.
- Published
- 2007
- Full Text
- View/download PDF
32. Gestural text entry on multiple devices
- Author
-
Brad A. Myers and Jacob O. Wobbrock
- Subjects
Ubiquitous computing, Multimedia, Computer science, Human–computer interaction, Joystick, Input device, Segmentation, Text entry, Isometric joystick
- Abstract
We present various adaptations of the EdgeWrite unistroke text entry method that work on multiple computer input devices: styluses, touchpads, displacement and isometric joysticks, four keys or buttons, and trackballs. We argue that consistent, flexible, multi-device input is important to both accessibility and to ubiquitous computing. For accessibility, multi-device input means users can switch among devices, distributing strain and fatigue among different muscle groups. For ubiquity, it means users can "learn once, write anywhere," even as new devices emerge. By considering the accessibility and ubiquity of input techniques, we can design for both motor-impaired users and "situationally impaired" able-bodied users who are on-the-go. We discuss the requirements for such input and the challenges of multi-device text entry, such as solving the segmentation problem. This paper accompanies a demonstration of EdgeWrite on multiple devices.
- Published
- 2005
- Full Text
- View/download PDF
33. Exploring edge-based input techniques for handheld text entry
- Author
-
Scott E. Hudson, Jacob O. Wobbrock, and Brad A. Myers
- Subjects
Multimedia, Computer science, Computer access, Mobile computing, Usability, Human–computer interaction, Enhanced Data Rates for GSM Evolution, Text entry, User interface, Stylus, Mobile device
- Abstract
We are investigating how handheld devices like Palm PDAs and PocketPCs can be used as assistive technologies for computer access by people with motor impairments such as Muscular Dystrophy and Cerebral Palsy. As part of this research, we are developing new input techniques for handheld text entry. People with motor impairments suffer from symptoms that affect their ability to use conventional text entry methods. One symptom is a lack of stability in stylus movements caused by tremor or spasm. In an effort to create a more stable means of text entry, we are researching how to leverage elevated physical edges in our development of new text entry techniques. We present three edge-based techniques: Edge Keyboards, CornerSlide, and EdgeWrite.
- Published
- 2004
- Full Text
- View/download PDF
34. Joystick text entry with date stamp, selection keyboard, and EdgeWrite
- Author
-
Htet Htet Aung, Brad A. Myers, and Jacob O. Wobbrock
- Subjects
Multimedia, Computer science, Joystick, Input device, Text entry, Selection (genetic algorithm)
- Abstract
Mobile phones and game consoles are two domains in which a good joystick text entry method would be valuable, since these devices lack conventional keyboards. Joysticks have considerable tenure as input devices, but few text entry methods have been developed for them. Common joystick text entry methods, like date stamp and selection keyboard, are selection-based: they require screen real estate to display options, are difficult to customize, cannot be used without looking, and are slow. A gestural joystick method, on the other hand, could alleviate these limitations in exchange for the need to learn letter-forms. If the letter-forms were easy to learn, such a technique could have significant advantages over current methods.
- Published
- 2004
- Full Text
- View/download PDF
35. The benefits of physical edges in gesture-making
- Author
-
Jacob O. Wobbrock
- Subjects
Empirical research, Multimedia, Human–computer interaction, Computer science, Stability (learning theory), Leverage (statistics), Alphabet, Stylus, Gesture
- Abstract
People with motor impairments often cannot use a keyboard or a mouse. Our previous work showed that a handheld device, connected to a PC, could be effective for computer access for some people with motor impairments. But text entry was slow, and the popular unistroke methods like Graffiti proved difficult for some people with motor control problems. We are now investigating how physical edges can provide stability for stylus gestures, and we are designing a unistroke alphabet whose letter-forms are defined along the edges of a small plastic square hole. This paper presents data on the benefits of physical edges in making gestures. It then describes EdgeWrite, a new unistroke alphabet designed to leverage physical edges for greater stability in text entry.
- Published
- 2003
- Full Text
- View/download PDF
36. Using handhelds to help people with motor impairments
- Author
-
Jacob O. Wobbrock, Robert C. Miller, Brad A. Myers, Brian Yeung, Sunny Yang, and Jeffrey Nichols
- Subjects
Multimedia, Information interfaces and presentation (e.g., HCI), Computer science, Gross motor skill, Input device, Software, Human–computer interaction, Stylus, Mobile device, Fine motor
- Abstract
People with Muscular Dystrophy (MD) and certain other muscular and nervous system disorders lose their gross motor control while retaining fine motor control. The result is that they lose the ability to move their wrists and arms, and therefore their ability to operate a mouse and keyboard. However, they can often still use their fingers to control a pencil or stylus, and thus can use a handheld computer such as a Palm. We have developed software that allows the handheld to substitute for the mouse and keyboard of a PC, and tested it with four people (ages 10, 12, 27, and 53) with MD. The 12-year-old had lost the ability to use a mouse and keyboard, but with our software, he was able to use the Palm to access email, the web, and computer games. The 27-year-old reported that he found the Palm so much better that he was using it full-time instead of a keyboard and mouse. The other two subjects said that our software was much less tiring than using the conventional input devices, and enabled them to use computers for longer periods. We report the results of these case studies, and the adaptations made to our software for people with disabilities.
- Published
- 2002
- Full Text
- View/download PDF