34 results for "Jacob O. Wobbrock"
Search Results
2. 'I Am Iron Man'
- Author
-
Meredith Ringel Morris, Jacob O. Wobbrock, and Abdullah Ali
- Subjects
Information interfaces and presentation (HCI), Learnability, Mindset, Priming (psychology), Cognitive psychology, Psychology, Gestures - Abstract
Priming has been used to increase the diversity of proposals in end-user elicitation studies, but it has not been investigated thoroughly in this context. We conducted a distributed end-user elicitation study with 167 participants across three priming conditions: a no-priming control group, a sci-fi priming group, and a creative-mindset group. We then evaluated the gestures proposed by these groups in a distributed learnability and memorability study with 18 participants. The user-elicited gestures from the sci-fi group were significantly faster to learn, requiring an average of 1.22 viewings compared to 1.60 viewings for the control gestures and 1.56 viewings for the gestures elicited from the creative-mindset group. In addition, both primed groups produced more memorable gestures: 80% of the sci-fi-primed gestures and 73% of the creative-mindset gestures were recalled correctly after one week without practice, compared to 43% of the control-group gestures.
- Published
- 2021
- Full Text
- View/download PDF
3. Understanding Blind Screen-Reader Users’ Experiences of Digital Artboards
- Author
-
Anastasia Schaadhardt, Jacob O. Wobbrock, and Alexis Hiniker
- Subjects
Screen readers, Computer science, Cognition, Usability, Creativity, Human–computer interaction - Abstract
Two-dimensional canvases are the core components of many digital productivity and creativity tools, with “artboards” containing objects rather than pixels. Unfortunately, the contents of artboards remain largely inaccessible to blind users relying on screen-readers, but the precise problems are not well understood. This study sought to understand how blind screen-reader users interact with artboards. Specifically, we conducted contextual interviews, observations, and task-based usability studies with 15 blind participants to understand their experiences of artboards found in Microsoft PowerPoint, Apple Keynote, and Google Slides. Participants expressed that the inaccessibility of these artboards contributes to significant educational and professional barriers. We found that the key problems faced were: (1) high cognitive loads from a lack of feedback about artboard contents and object state; (2) difficulty determining relationships among artboard objects; and (3) constant uncertainty about whether object manipulations were successful. We offer design remedies that improve feedback for object state, relationships, and manipulations.
- Published
- 2021
- Full Text
- View/download PDF
4. Ability-based design
- Author
-
Gregg C. Vanderheiden, Jacob O. Wobbrock, Shaun K. Kane, and Krzysztof Z. Gajos
- Subjects
General Computer Science - Abstract
By focusing on users' abilities rather than disabilities, designers can create interactive systems better matched to those abilities.
- Published
- 2018
- Full Text
- View/download PDF
5. Tenets for Social Accessibility
- Author
-
Cynthia L. Bennett, Kristen Shinohara, Wanda Pratt, and Jacob O. Wobbrock
- Subjects
Disabled people, Computer Science Applications, Human-Computer Interaction, Engineering design process, Design technology, Psychology - Abstract
Despite years of addressing disability in technology design and advocating user-centered design practices, popular mainstream technologies remain largely inaccessible to people with disabilities. We conducted a design course study investigating how student designers regard disability and explored how designing for multiple disabled and nondisabled users encouraged students to think about accessibility in the design process. Across two university course offerings one year apart, we examined how students approached a design project while learning user-centered design concepts and techniques, working with people with and without disabilities throughout the project. In addition, we compared how students incorporated disability-focused design approaches within a classroom setting. We found that designing for multiple stakeholders with and without disabilities expanded students' understanding of accessible design by demonstrating that people with the same disability can have diverse needs and by aligning such needs with those of nondisabled users. We also found that approaches targeted toward designing for people with disabilities complemented interactions with users, particularly with regard to managing varying abilities across users or incorporating social aspects. Our findings contribute to an understanding of how we might effect change in design practice by working with multiple stakeholders with and without disabilities whenever possible. We refined Design for Social Accessibility by incorporating these findings into three tenets: (1) design for disability ought to incorporate users with and without disabilities; (2) design should address functional and social factors simultaneously; and (3) design should include tools that spur consideration of social factors in accessible design.
- Published
- 2018
- Full Text
- View/download PDF
6. Demonstration of GestureCalc
- Author
-
Philip Garrison, Jacob O. Wobbrock, Leah Perlmutter, Richard E. Ladner, Justin Petelka, Bindita Chaudhuri, and James Fogarty
- Subjects
Screen readers, Computer science, Calculator, Human–computer interaction, Keypad, Audio feedback, Mobile devices, Gestures - Abstract
Keypad-based character input in existing digital calculator applications on touch screen devices requires precise, targeted key presses that are time-consuming and error-prone for many screen reader users. We demonstrate GestureCalc, a digital calculator that uses target-free gestures for arithmetic tasks. It allows eyes-free target-less input of digits and operations through taps and directional swipes with one to three fingers, guided by minimal audio feedback. A study of the effectiveness of GestureCalc for screen reader users appears in a full paper by the authors at this conference.
- Published
- 2019
- Full Text
- View/download PDF
7. Just Ask Me
- Author
-
Leah Findlater, Rachel L. Franz, and Jacob O. Wobbrock
- Subjects
Target acquisition, Touchscreen, Younger adults, Older adults, Psychology - Abstract
To understand whether people can identify their optimal touchscreen target sizes, we asked older and younger adults to identify optimal target sizes on a questionnaire and compared these chosen sizes to performance on a target acquisition task. We found that older individuals (60+) were better than younger adults at choosing their optimal target sizes. In fact, younger adults underestimated the smallest target size they could accurately touch by almost 6mm. This study suggests that older adults may be able to better configure target size settings than younger adults.
- Published
- 2019
- Full Text
- View/download PDF
8. Perception and Adoption of Mobile Accessibility Features by Older Adults Experiencing Ability Changes
- Author
-
Leah Findlater, Jacob O. Wobbrock, Rachel L. Franz, and Yi Cheng
- Subjects
Perception, Applied psychology, Cognition, Interview study, Android (operating system), Mobile devices, Psychology - Abstract
To investigate how older adults perceive ability changes (e.g., sensory, physical, cognitive) and how attitudes toward those changes affect the perception and adoption of built-in mobile accessibility features (such as those found on Apple iOS and Google Android smartphones and tablets), we conducted an interview study with 14 older adults and six of their family members. Accessibility features were difficult for participants to find and configure, issues compounded by a reluctance to use trial-and-error. Four to six weeks after the interview, however, some participants had adopted new accessibility features that we had shown them, suggesting a willingness to adopt once features are made visible. The older adults who already used accessibility features had experienced a disability earlier in life, suggesting that those experiencing progressive ability changes later in life might not be as aware of accessibility features, or might not have the know-how to adapt technologies to their changing needs. Our findings provide support for creating technologies that can detect older adults' abilities and recommend or enact interface changes to match.
- Published
- 2019
- Full Text
- View/download PDF
9. Beyond the Input Stream
- Author
-
Jacob O. Wobbrock and Mingrui Ray Zhang
- Subjects
Text entry, Ground truth, Words per minute, Computer science, Natural language processing, Gestures - Abstract
Method-independent text entry evaluation tools are often used to conduct text entry experiments and compute performance metrics like words per minute and error rates. The input stream paradigm of Soukoreff & MacKenzie (2001, 2003), which presents a string for transcription and encodes the text entry process as a strictly serial sequence of characters, remains prevalent. Although an advance over prior paradigms, the input stream paradigm cannot support many modern text entry features. To address these limitations, we present transcription sequences: for each new input, a snapshot of the entire transcribed string up to that point is captured. By comparing adjacent strings within a transcription sequence, we can compute all prior metrics, reduce artificial constraints on text entry evaluations, and introduce new metrics. We conducted a study with 18 participants who typed 1,620 phrases using a laptop keyboard, an on-screen keyboard, and a smartphone keyboard with features such as auto-correction, word prediction, and copy-and-paste. We also evaluated the non-keyboard methods Dasher, gesture typing, and T9. Our results show that modern text entry methods and features can be accommodated, prior metrics can be correctly computed, and new metrics can reveal insights. We validated our algorithms using ground truth based on cursor positioning, confirming 100% accuracy. We also provide a new tool, TextTest++, to facilitate web-based evaluations.
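The core idea, snapshotting the whole transcribed string after every input and diffing adjacent snapshots, can be illustrated with a minimal sketch. This is not the authors' TextTest++ implementation; the snapshot sequence and the event classification below are hypothetical simplifications.

```python
# Minimal sketch (not the authors' TextTest++ implementation) of the
# transcription-sequence idea: after every input event, snapshot the
# entire transcribed string, then compare adjacent snapshots.
import difflib

def wpm(final_text, seconds):
    # Standard words-per-minute: one "word" = 5 characters.
    return (len(final_text) / 5.0) / (seconds / 60.0)

def edits_between(prev, curr):
    """Classify what one input event did by diffing adjacent snapshots."""
    ops = difflib.SequenceMatcher(a=prev, b=curr).get_opcodes()
    return [(tag, prev[i1:i2], curr[j1:j2])
            for tag, i1, i2, j1, j2 in ops if tag != "equal"]

snapshots = ["", "t", "th", "thw", "th", "the"]  # hypothetical sequence
events = [edits_between(a, b) for a, b in zip(snapshots, snapshots[1:])]
# Count corrective events (any deletion or replacement) vs. insertions.
corrections = sum(1 for e in events
                  if any(tag in ("delete", "replace") for tag, _, _ in e))
```

Here the "thw" to "th" transition is recognized as a correction even though no serial keystroke log was kept, which is what lets this representation accommodate features like auto-correction and cursor repositioning.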
- Published
- 2019
- Full Text
- View/download PDF
10. Isolating the Effects of Web Page Visual Appearance on the Perceived Credibility of Online News among College Students
- Author
-
Marijn A. Burger, Anya K. Hsu, Jacob O. Wobbrock, and Michael J. Magee
- Subjects
Civic discourse, Perceived credibility, Visual appearance, Web pages, Fonts, Laboratory experiment, Psychology - Abstract
Online news sources have transformed civic discourse, and much has been made of their credibility. Although web page credibility has been investigated generally, most work has focused on the credibility of web page content. In this work, we study the isolated appearance of news-like web pages. Specifically, we report on a laboratory experiment in which 31 college students rated the perceived credibility of news-like web pages devoid of meaningful content. These pages contained only "lorem ipsum" text, indistinct videos and images, non-functional links, and various font settings. Our findings show that perceived credibility is indeed affected by some purely presentational factors: video presence increased credibility, while large fonts and the absence of images reduced credibility. Having a few, but not too many, images increased credibility for short articles, especially in the presence of large fonts. Follow-up interviews revealed that participants noticed images, videos, and font sizes when making credibility judgments, corroborating our quantitative experimental results.
- Published
- 2019
- Full Text
- View/download PDF
11. Cluster Touch
- Author
-
Jacob O. Wobbrock and Martez E. Mott
- Subjects
Information interfaces and presentation (HCI), Computer science, Human–computer interaction - Abstract
We present Cluster Touch, a combined user-independent and user-specific touch offset model that improves the accuracy of touch input on smartphones for people with motor impairments, and for people experiencing situational impairments while walking. Cluster Touch combines touch examples from multiple users to create a shared user-independent touch model, which is then updated with touch examples provided by an individual user to make it user-specific. Owing to this combination, Cluster Touch allows people to quickly improve the accuracy of their smartphones by providing only 20 touch examples. In a user study with 12 people with motor impairments and 12 people without motor impairments, but who were walking, Cluster Touch improved touch accuracy by 14.65% for the former group and 6.81% for the latter group over the native touch sensor. Furthermore, in an offline analysis of existing mobile interfaces, Cluster Touch improved touch accuracy by 8.21% and 4.84% over the native touch sensor for the two user groups, respectively.
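As a rough illustration of how a shared model might be personalized with a few touch examples, here is a hypothetical sketch (not the paper's actual Cluster Touch algorithm): a raw touch point is corrected by a distance-weighted average of known touch-to-offset examples, with a user's own examples weighted more heavily than the shared, user-independent pool. The weighting scheme and parameter values are invented for illustration.

```python
# Hypothetical sketch of a combined user-independent/user-specific
# touch-offset model (not the paper's actual algorithm).
import math

def predict_offset(x, y, shared, personal, personal_weight=3.0):
    """shared/personal: lists of (tx, ty, dx, dy) offset examples."""
    num_x = num_y = denom = 0.0
    for examples, base_w in ((shared, 1.0), (personal, personal_weight)):
        for tx, ty, dx, dy in examples:
            # Nearby examples and personal examples get more influence.
            w = base_w / (1.0 + math.hypot(x - tx, y - ty))
            num_x += w * dx
            num_y += w * dy
            denom += w
    return (num_x / denom, num_y / denom) if denom else (0.0, 0.0)

def corrected_touch(x, y, shared, personal):
    dx, dy = predict_offset(x, y, shared, personal)
    return (x + dx, y + dy)
```

With only a handful of personal examples, the prediction is dominated by the shared pool; as the user supplies more examples, the model shifts toward their individual touch behavior, mirroring the 20-example personalization described above.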
- Published
- 2019
- Full Text
- View/download PDF
12. Text Entry Throughput
- Author
-
Jacob O. Wobbrock, Shumin Zhai, and Mingrui Ray Zhang
- Subjects
Text entry, Throughput, Performance metric, Usability, Computer science - Abstract
Human-computer input performance inherently involves speed-accuracy tradeoffs: the faster users act, the more inaccurate those actions become. Therefore, comparing speeds and accuracies separately can produce ambiguous outcomes: does a fast but inaccurate technique perform better or worse overall than a slow but accurate one? For pointing, speed and accuracy have been unified for over 60 years as throughput (bits/s) (Crossman 1957, Welford 1968), but to date, no similar metric has been established for text entry. In this paper, we introduce a method-independent text entry throughput metric based on Shannon information theory (1948). To explore the practical usability of the metric, we conducted an experiment in which 16 participants typed with a laptop keyboard under different cognitive sets, i.e., speed-accuracy biases. Our results show that as a performance metric, text entry throughput remains relatively stable under different speed-accuracy conditions. We also evaluated a smartphone keyboard with 12 participants, finding that throughput varied the least among text entry metrics. This work allows researchers to characterize text entry performance with a single unified measure of input efficiency.
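The general Shannon-style idea, throughput as transmitted information per unit time, can be sketched as follows. This is only an illustration of the concept; the paper's actual metric differs in its details. Here, mutual information between presented and transcribed characters is estimated from observed character pairs, then divided by typing time.

```python
# Conceptual sketch only: throughput as information transmitted per
# second, estimated from presented/transcribed character pairs.
import math
from collections import Counter

def mutual_information(pairs):
    """pairs: list of (presented_char, transcribed_char)."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        pxy = c / n
        mi += pxy * math.log2(pxy / ((px[x] / n) * (py[y] / n)))
    return mi  # bits per character

def throughput(pairs, total_seconds):
    # Bits per character, times characters, divided by elapsed time.
    return mutual_information(pairs) * len(pairs) / total_seconds
```

Perfectly accurate transcription of two equiprobable symbols carries 1 bit per character, so typing faster raises throughput only insofar as accuracy holds up, which is the unification of speed and accuracy the metric is after.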
- Published
- 2019
- Full Text
- View/download PDF
13. Crowdlicit
- Author
-
Jacob O. Wobbrock, Abdullah Ali, and Meredith Ringel Morris
- Subjects
Computer science, End users, Crowdsourcing, Identification, Representativeness, Diversity - Abstract
End-user elicitation studies are a popular design method. Currently, such studies are usually confined to a lab, limiting the number and diversity of participants, and therefore the representativeness of their results. Furthermore, the quality of the results from such studies generally lacks any formal means of evaluation. In this paper, we address some of the limitations of elicitation studies through the creation of the Crowdlicit system along with the introduction of end-user identification studies, which are the reverse of elicitation studies. Crowdlicit is a new web-based system that enables researchers to conduct online and in-lab elicitation and identification studies. We used Crowdlicit to run a crowd-powered elicitation study based on Morris's "Web on the Wall" study (2012) with 78 participants, arriving at a set of symbols that included six new symbols different from Morris's. We evaluated the effectiveness of 49 symbols (43 from Morris and six from Crowdlicit) by conducting a crowd-powered identification study. We show that the Crowdlicit elicitation study resulted in a set of symbols that was significantly more identifiable than Morris's.
- Published
- 2019
- Full Text
- View/download PDF
14. Anachronism by Design: Understanding Young Adults’ Perceptions of Computer Iconography
- Author
-
Jacob O. Wobbrock, Erin McAweeney, and Abdullah Ali
- Subjects
Human Factors and Ergonomics, Education, Perception, Iconography, Icon design, User interface design, Human-Computer Interaction, Software - Abstract
Computer iconography in desktop operating systems and applications has evolved in style but, in many cases, not in substance for decades. For example, in many applications, a 3.5" floppy diskette icon still represents the “Save” function. But many of today's young adult computer users grew up without direct physical experience of floppy diskettes and many of the other objects that are represented by enduring legacy icons. In this article, we describe a multi-part study conducted to gain an understanding of young adults’ perceptions of computer iconography, and to possibly update that iconography based on young adults’ current mental models. To carry out this work, we gathered a set of 39 icons found on common desktop operating systems and applications and also recruited 30 young adults aged 18–22. In the first part of our study, an end-user elicitation study, we asked participants to propose sketches of icons they deemed most appropriate to trigger the functions associated with our selected icons. We elicited a total of 3,590 individual icon sketches and grouped these into a set of participant-generated icons. In the second part of our study, an end-user identification study, we showed participants the 39 icons from current operating systems and asked them to name the computing functions triggered when those icons were selected. We also asked them to identify the real-world objects, if any, those icons represented, and to tell us about their personal experiences with those objects. Finally, we conducted a second identification study with 60 new participants from Amazon's Mechanical Turk on the set of participant-generated icons we obtained from the first part of our study to see how recognizable our young adults’ sketched icons were. Our study results highlight 20 anachronistic icons currently found on desktop operating systems in need of redesign. 
Our results also show that with increased icon production, the chances for anachronism significantly decrease, supporting the “production principle” in elicitation studies. Furthermore, our results include an updated set of icons derived from our young adult participants. This work contributes an approach to using end-user elicitation to understand users, user interface design, and specifically, icon design.
- Published
- 2021
- Full Text
- View/download PDF
15. Research contributions in human-computer interaction
- Author
-
Julie A. Kientz and Jacob O. Wobbrock
- Subjects
Human-Computer Interaction, Computer science
- Published
- 2016
- Full Text
- View/download PDF
16. Crowdsourcing Similarity Judgments for Agreement Analysis in End-User Elicitation Studies
- Author
-
Jacob O. Wobbrock, Abdullah Ali, and Meredith Ringel Morris
- Subjects
Computer science, End users, Agreement analysis, Voice commands, Crowdsourcing, Human judgment, Similarity, Cluster analysis - Abstract
End-user elicitation studies are a popular design method, but their data require substantial time and effort to analyze. In this paper, we present Crowdsensus, a crowd-powered tool that enables researchers to efficiently analyze the results of elicitation studies using subjective human judgment and automatic clustering algorithms. In addition to our own analysis, we asked six expert researchers with experience running and analyzing elicitation studies to analyze an end-user elicitation dataset of 10 functions for operating a web browser, each with 43 voice commands elicited from end users, for a total of 430 voice commands. We used Crowdsensus to gather similarity judgments of these same 430 commands from 410 online crowd workers. The crowd outperformed the experts, arriving at the same results for seven of eight functions and resolving a function on which the experts failed to agree. Using Crowdsensus was also about four times faster than using experts.
- Published
- 2018
- Full Text
- View/download PDF
17. Examining Image-Based Button Labeling for Accessibility in Android Apps through Large-Scale Analysis
- Author
-
Xiaoyi Zhang, James Fogarty, Jacob O. Wobbrock, and Anne Spencer Ross
- Subjects
Computer science, Mobile apps, Android (operating system), Image-based buttons - Abstract
We conducted the first large-scale analysis of the accessibility of mobile apps, examining what unique insights this can provide into the state of mobile app accessibility. We analyzed 5,753 free Android apps for label-based accessibility barriers in three classes of image-based buttons: Clickable Images, Image Buttons, and Floating Action Buttons. An epidemiology-inspired framework was used to structure the investigation. The population of free Android apps was assessed for label-based inaccessible button "diseases." Three determinants of the disease were considered: missing labels, duplicate labels, and uninformative labels. The prevalence, or frequency of occurrence, of these barriers was examined in apps and in classes of image-based buttons. In the app analysis, 35.9% of analyzed apps had 90% or more of their assessed image-based buttons labeled, 45.9% had less than 10% of assessed image-based buttons labeled, and the remaining apps were distributed relatively uniformly along the proportion of elements labeled. In the class analysis, 92.0% of Floating Action Buttons were found to have missing labels, compared to 54.7% of Image Buttons and 86.3% of Clickable Images. We discuss how these accessibility barriers are addressed in existing treatments, including accessibility development guidelines.
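The three determinants above (missing, duplicate, and uninformative labels) can be sketched as a simple classifier over an app's button labels. The "uninformative" word list here is a hypothetical stand-in, not the paper's actual criterion, and real analyses operate on Android `contentDescription` attributes extracted from app UIs.

```python
# Hedged sketch of the three label-barrier determinants: counts of
# missing, duplicate, and uninformative labels among an app's
# image-based buttons. The UNINFORMATIVE set is a made-up placeholder.
from collections import Counter

UNINFORMATIVE = {"button", "image", "icon", "unlabeled"}

def label_barriers(labels):
    """labels: content descriptions of an app's image-based buttons
    (None or empty for absent). Returns counts per barrier type."""
    present = [l.strip().lower() for l in labels if l and l.strip()]
    dupes = sum(c for c in Counter(present).values() if c > 1)
    return {
        "missing": sum(1 for l in labels if not l or not l.strip()),
        "duplicate": dupes,
        "uninformative": sum(1 for l in present if l in UNINFORMATIVE),
    }
```

Run over thousands of apps, per-app counts like these are what allow prevalence figures of the kind reported above to be computed for each button class.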
- Published
- 2018
- Full Text
- View/download PDF
18. Incorporating Social Factors in Accessible Design
- Author
-
Kristen Shinohara, Jacob O. Wobbrock, and Wanda Pratt
- Subjects
Computer science, Design activities, Disabled people, User-centered design - Abstract
Personal technologies are rarely designed to be accessible to disabled people, partly due to the perceived challenge of including disability in design. Through design workshops, we addressed this challenge by infusing user-centered design activities with Design for Social Accessibility, a perspective emphasizing social aspects of accessibility, to investigate how professional designers can leverage social factors to include accessibility in design. We focused on how professional designers incorporated Design for Social Accessibility's three tenets: (1) work with users with and without visual impairments; (2) consider social and functional factors; and (3) employ tools (a framework and method cards) to raise awareness and prompt reflection on social aspects of accessible design. We then interviewed designers about their workshop experiences. We found Design for Social Accessibility to be an effective set of tools and strategies, incorporating both social and functional factors and the perspectives of disabled and nondisabled users, that helped designers create accessible designs.
- Published
- 2018
- Full Text
- View/download PDF
19. RainCheck
- Author
-
Ying-Chao Tung, Jacob O. Wobbrock, Isaac Zinda, and Mayank Goel
- Subjects
Computer science, Capacitive sensing, Touchscreen, Usability, Interference, Computer vision, Gestures - Abstract
Modern smartphones are built with capacitive-sensing touchscreens, which can detect anything that is conductive or has a dielectric differential with air. The human finger is an example of such a dielectric, and works wonderfully with such touchscreens. However, touch interactions are disrupted by raindrops, water smear, and wet fingers because capacitive touchscreens cannot distinguish finger touches from other conductive materials. When users' screens get wet, the screen's usability is significantly reduced. RainCheck addresses this hazard by filtering out potential touch points caused by water to differentiate fingertips from raindrops and water smear, adapting in real-time to restore successful interaction to the user. Specifically, RainCheck uses the low-level raw sensor data from touchscreen drivers and employs precise selection techniques to resolve water-fingertip ambiguity. Our study shows that RainCheck improves gesture accuracy by 75.7%, touch accuracy by 47.9%, and target selection time by 80.0%, making it a successful remedy to interference caused by rain and other water.
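Purely as an illustration of the kind of filtering such a system performs, here is a hypothetical heuristic that discards touch candidates whose capacitive profile looks more like water than a fingertip. The blob attributes and thresholds are invented; the actual RainCheck system works on low-level raw sensor data from touchscreen drivers with more sophisticated discrimination than this.

```python
# Hypothetical illustration only: filter candidate touch points whose
# capacitive blobs look like water rather than fingertips. Water smear
# is assumed here to produce large, weak blobs; thresholds are made up.
def filter_touches(blobs, min_intensity=40, max_area=600):
    """blobs: dicts with 'intensity' and 'area' from a touch driver."""
    return [b for b in blobs
            if b["intensity"] >= min_intensity and b["area"] <= max_area]
```

The essential point is architectural: by intervening below the OS touch layer, on raw candidate points, a filter like this can veto spurious contacts before applications ever see them.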
- Published
- 2018
- Full Text
- View/download PDF
20. $Q
- Author
-
Jacob O. Wobbrock, Lisa Anthony, and Radu-Daniel Vatavu
- Subjects
Computer science, Gesture recognition, Wearable computers, Mobile devices, Benchmarking, Gestures - Abstract
We introduce $Q, a super-quick, articulation-invariant point-cloud stroke-gesture recognizer for mobile, wearable, and embedded devices with low computing resources. $Q ran up to 142X faster than its predecessor $P in our benchmark evaluations on several mobile CPUs, and executed in less than 3% of $P's computations without any accuracy loss. In our most extreme evaluation demanding over 99% user-independent recognition accuracy, $P required 9.4s to run a single classification, while $Q completed in just 191ms (a 49X speed-up) on a Cortex-A7, one of the most widespread CPUs on the mobile market. $Q was even faster on a low-end 600-MHz processor, on which it executed in only 0.7% of $P's computations (a 142X speed-up), reducing classification time from two minutes to less than one second. $Q is the next major step for the "$-family" of gesture recognizers: articulation-invariant, extremely fast, accurate, and implementable on top of $P with just 30 extra lines of code.
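To give a flavor of point-cloud gesture matching in the "$-family," here is a much-simplified greedy matcher. It omits $Q's resampling, normalization, lower bounds, and early abandoning, so it is a pedagogical sketch rather than the published algorithm; the template clouds are hypothetical.

```python
# Simplified sketch of $-family point-cloud matching: greedily pair each
# candidate point with its nearest unmatched template point. $Q adds
# aggressive optimizations (lower bounds, early abandoning) on top of
# this basic idea.
import math

def cloud_distance(pts1, pts2):
    """Greedy one-directional match cost between equal-length clouds."""
    unmatched = list(range(len(pts2)))
    total = 0.0
    for i, (x1, y1) in enumerate(pts1):
        j_best = min(unmatched,
                     key=lambda j: math.hypot(x1 - pts2[j][0],
                                              y1 - pts2[j][1]))
        # Later matches count less, echoing $P's confidence weighting.
        weight = 1.0 - i / len(pts1)
        total += weight * math.hypot(x1 - pts2[j_best][0],
                                     y1 - pts2[j_best][1])
        unmatched.remove(j_best)
    return total

def classify(candidate, templates):
    """templates: dict of name -> point cloud; returns best match."""
    return min(templates, key=lambda t: cloud_distance(candidate,
                                                       templates[t]))
```

Because matching treats the gesture as an unordered point cloud, recognition is invariant to stroke order and direction, which is what "articulation-invariant" means above.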
- Published
- 2018
- Full Text
- View/download PDF
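$Q keeps $P's greedy point-cloud matching but accelerates it with precomputed lookup tables and early abandoning. A minimal sketch of the underlying $P-style cloud distance (the unoptimized core that $Q speeds up, not $Q itself), assuming both clouds are already resampled to the same number of points and scale-normalized:

```python
import math

def cloud_distance(points, template, start=0):
    """Greedy minimum-distance matching between two equal-length point
    clouds, in the style of the $P recognizer. Each point is an (x, y)
    tuple; earlier matches are weighted more heavily."""
    n = len(points)
    matched = [False] * n
    total = 0.0
    i = start
    for step in range(n):
        # find the nearest unmatched template point for points[i]
        best_j, best_d = -1, float("inf")
        for j in range(n):
            if not matched[j]:
                d = math.dist(points[i], template[j])
                if d < best_d:
                    best_d, best_j = d, j
        matched[best_j] = True
        weight = 1.0 - step / n  # earlier matches count more
        total += weight * best_d
        i = (i + 1) % n
    return total
```

$Q's speed-ups come from bounding this sum and abandoning a template as soon as the partial sum exceeds the best distance found so far.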
21. 'Suddenly, we got to become therapists for each other'
- Author
-
Wanda Pratt, Stephen M. Schueller, Jacob O. Wobbrock, and Kathleen O'Leary
- Subjects
Emotional support ,020205 medical informatics ,05 social sciences ,Applied psychology ,ComputingMilieux_PERSONALCOMPUTING ,02 engineering and technology ,Peer support ,Talk therapy ,Mental health ,Scale (social sciences) ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Anxiety ,Mental health care ,0501 psychology and cognitive sciences ,medicine.symptom ,Psychology ,050107 human factors ,User feedback - Abstract
Talk therapy is a common, effective, and desirable form of mental health treatment. Yet, it is inaccessible to many people. Enabling peers to chat online using effective principles of talk therapy could help scale this form of mental health care. To understand how such chats could be designed, we conducted a two-week field experiment with 40 people experiencing mental illnesses, comparing two types of online chats: chats guided by prompts and unguided chats. Results show that anxiety was significantly reduced from pre-test to post-test. User feedback revealed that guided chats provided solutions to problems and new perspectives, and were perceived as "deep," while unguided chats offered personal connection on shared experiences and were experienced as "smooth." We contribute the design of an online guided chat tool and insights into the design of peer support chat systems that guide users to initiate, maintain, and reciprocate emotional support.
- Published
- 2018
- Full Text
- View/download PDF
22. Drunk User Interfaces
- Author
-
Alex Mariakakis, Jacob O. Wobbrock, Shwetak N. Patel, and Sayna Parsi
- Subjects
business.product_category ,Computer science ,05 social sciences ,Applied psychology ,Law enforcement ,020207 software engineering ,Cognition ,Alcohol ,02 engineering and technology ,chemistry.chemical_compound ,chemistry ,Blood alcohol ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,User interface ,business ,050107 human factors ,Breathalyzer - Abstract
Breathalyzers, the standard quantitative method for assessing inebriation, are primarily owned by law enforcement and used only after a potentially inebriated individual is caught driving. However, not everyone has access to such specialized hardware. We present drunk user interfaces: smartphone user interfaces that measure how alcohol affects a person's motor coordination and cognition using performance metrics and sensor data. We examine five drunk user interfaces and combine them to form the "DUI app". DUI uses machine learning models trained on human performance metrics and sensor data to estimate a person's blood alcohol level (BAL). We evaluated DUI on 14 individuals in a week-long longitudinal study wherein each participant used DUI at various BALs. We found that with a global model that accounts for user-specific learning, DUI can estimate a person's BAL with an absolute mean error of 0.005% ± 0.007% and a Pearson's correlation coefficient of 0.96 with breathalyzer measurements.
- Published
- 2018
- Full Text
- View/download PDF
23. Self-Conscious or Self-Confident? A Diary Study Conceptualizing the Social Accessibility of Assistive Technology
- Author
-
Jacob O. Wobbrock and Kristen Shinohara
- Subjects
Sociotechnical system ,Product design ,business.industry ,media_common.quotation_subject ,05 social sciences ,Internet privacy ,020207 software engineering ,Functional requirement ,02 engineering and technology ,Interaction design ,Computer Science Applications ,Human-Computer Interaction ,Feeling ,Human–computer interaction ,Perception ,0202 electrical engineering, electronic engineering, information engineering ,Accessibility ,0501 psychology and cognitive sciences ,Psychology ,business ,050107 human factors ,Web accessibility ,media_common - Abstract
With the recent influx of smartphones, tablets, and wearables such as watches and glasses, personal interactive device use is increasingly visible and commonplace in public and social spaces. Assistive Technologies (ATs) used by people with disabilities are observable to others and, as a result, can affect how AT users are perceived. This raises the possibility that what we call “social accessibility” may be as important as “functional accessibility” when considering ATs. But, to date, ATs have almost exclusively been regarded as functional aids. For example, ATs are defined by the Technical Assistance to the States Act as technologies that are “used to increase, maintain or improve functional capabilities of individuals with disabilities.” To investigate perceptions and self-perceptions of AT users, we conducted a diary study of two groups of participants: people with disabilities and people without disabilities. Our goal was to explore the types of interactions and perceptions that arise around AT use in social and public spaces. During our 4-week study, participants with sensory disabilities wrote about feeling either self-conscious or self-confident when using an assistive device in a social or public situation. Meanwhile, participants without disabilities were prompted to record their reactions and feelings whenever they saw ATs used in social or public situations. We found that AT form and function do influence social interactions by impacting self-efficacy and self-confidence. When the design of form or function is poor, or when inequality in technological accessibility exists, social inclusion is negatively affected, as are perceptions of ability. We contribute a definition for the “social accessibility” of ATs and subsequently offer Design for Social Accessibility (DSA) as a holistic design stance focused on balancing an AT user's sociotechnical identity with functional requirements.
- Published
- 2016
- Full Text
- View/download PDF
24. Epidemiology as a Framework for Large-Scale Mobile Application Accessibility Assessment
- Author
-
Jacob O. Wobbrock, Xiaoyi Zhang, Anne Spencer Ross, and James Fogarty
- Subjects
education.field_of_study ,Computer science ,05 social sciences ,Population ,Mobile computing ,020207 software engineering ,02 engineering and technology ,Cognitive reframing ,App store ,Health indicator ,Terminology ,Stratified sampling ,World Wide Web ,Conceptual framework ,mental disorders ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,education ,050107 human factors - Abstract
Mobile accessibility is often a property considered at the level of a single mobile application (app), but rarely on a larger scale of the entire app "ecosystem," such as all apps in an app store, their companies, developers, and user influences. We present a novel conceptual framework for the accessibility of mobile apps inspired by epidemiology. It considers apps within their ecosystems, over time, and at a population level. Under this metaphor, "inaccessibility" is a set of diseases that can be viewed through an epidemiological lens. Accordingly, our framework puts forth notions like risk and protective factors, prevalence, and health indicators found within a population of apps. This new framing offers terminology, motivation, and techniques to reframe how we approach and measure app accessibility. It establishes how app accessibility can benefit from multi-factor, longitudinal, and population-based analyses. Our epidemiology-inspired conceptual framework is the main contribution of this work, intended to provoke thought and inspire new work enhancing app accessibility at a systemic level. In a preliminary exercise of our framework, we perform an analysis of the prevalence of common determinants or accessibility barriers. We assess the health of a stratified sample of 100 popular Android apps using Google's Accessibility Scanner. We find that 100% of apps have at least one of nine accessibility errors and examine which errors are most common. A preliminary analysis of the frequency of co-occurrences of multiple errors in a single app is also presented. We find 72% of apps have five or six errors, suggesting an interaction among different errors or an underlying influence.
- Published
- 2017
- Full Text
- View/download PDF
25. SIGCHI Social Impact Award Talk -- Ability-Based Design
- Author
-
Jacob O. Wobbrock
- Subjects
Focus (computing) ,Computer science ,business.industry ,Self ,media_common.quotation_subject ,05 social sciences ,Perspective (graphical) ,Internet privacy ,050301 education ,Context (language use) ,Space (commercial competition) ,Human–computer interaction ,Situated ,0501 psychology and cognitive sciences ,Affect (linguistics) ,Function (engineering) ,business ,0503 education ,050107 human factors ,media_common - Abstract
The term "disability" connotes an absence of ability, but it is like saying "dis-money" or "dis-height." All living people have some abilities [2]. Unfortunately, history is filled with examples of a focus on dis-ability, on what is missing, and on ensuing attempts to replace lost function to make people match a rigid world. Although often well intended, such a focus assumes humans must be adapted, and that interfaces, devices, and environments get to remain as they are. At the same time, our built things embody numerous "ability assumptions" imparted by their designers, and yet our built things remain unaware of their users' abilities. They also remain unaware of the situations their users are in, or how those situations affect their users' abilities [2,3]. An important shift in perspective comes by allowing people to "remain as they are," asking instead how interfaces, devices, and environments can bear the burden of becoming more suitable to their users' situated abilities. I call this perspective and the principles that accompany it "Ability-Based Design" [4,5], where the human abilities required to use a technology in a given context are questioned, and systems are made operable by or adaptable to alternative abilities. From this perspective, all people have varying degrees of ability, and different situations lead to different ability limitations, some long-term and some momentary. Some ability limitations come mostly from within the self, others from mostly outside the self. Ability-Based Design considers this whole "landscape of ability," respecting the human at its center and asking more of our technologies. In this talk, I will cover a decade's worth of projects related to Ability-Based Design, some directed at "people with disabilities" and others directed at "people in disabling situations." Rather than dive into any one project, I will convey a space of explored possibilities.
I will also put forth a grand challenge: that anyone, anywhere, at any time can interact with technologies ideally suited to their specific situated abilities, and that our technologies do the work to achieve this fit.
- Published
- 2017
- Full Text
- View/download PDF
26. Improving Dwell-Based Gaze Typing with Dynamic, Cascading Dwell Times
- Author
-
Meredith Ringel Morris, Martez E. Mott, Shane Williams, and Jacob O. Wobbrock
- Subjects
InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer science ,05 social sciences ,Real-time computing ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,02 engineering and technology ,Dwell time ,InformationSystems_MODELSANDPRINCIPLES ,Gaze typing ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,Cadence ,human activities ,050107 human factors ,Simulation - Abstract
We present cascading dwell gaze typing, a novel approach to dwell-based eye typing that dynamically adjusts the dwell time of keys in an on-screen keyboard based on the likelihood that a key will be selected next, and the location of the key on the keyboard. Our approach makes unlikely keys more difficult to select and likely keys easier to select by increasing and decreasing their required dwell times, respectively. To maintain a smooth typing rhythm for the user, we cascade the dwell time of likely keys, slowly decreasing the minimum allowable dwell time as a user enters text. Cascading the dwell time affords users the benefits of faster dwell times while causing little disruption to users' typing cadence. Results from a longitudinal study with 17 non-disabled participants show that our dynamic cascading dwell technique was significantly faster than a static dwell approach. Participants were able to achieve typing speeds of 12.39 WPM on average with our cascading technique, whereas participants were able to achieve typing speeds of 10.62 WPM on average with a static dwell time approach. In a small evaluation conducted with five people with ALS, participants achieved average typing speeds of 9.51 WPM with our cascading dwell approach. These results show that our dynamic cascading dwell technique has the potential to improve gaze typing for users with and without disabilities.
- Published
- 2017
- Full Text
- View/download PDF
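The core idea, adjusting a key's dwell threshold by its predicted likelihood, can be sketched as follows. The linear mapping and the millisecond constants are illustrative assumptions, not the paper's values, and the actual technique additionally cascades the minimum allowable dwell downward as typing progresses.

```python
def dwell_time(key_prob, min_ms=150, max_ms=800):
    """Map a next-key probability (0..1) from a language model to a dwell
    threshold in milliseconds: likely keys get short dwells, unlikely keys
    long ones. The constants and linear interpolation are illustrative."""
    t = max_ms - key_prob * (max_ms - min_ms)
    return max(min_ms, min(max_ms, t))
```

For example, under these assumed constants a key predicted with probability 0.9 would require a 215 ms dwell, while a key predicted with probability 0.1 would require 735 ms.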
27. Group Touch
- Author
-
Katie Davis, James Fogarty, Jacob O. Wobbrock, and Abigail Evans
- Subjects
Multimedia ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Group (mathematics) ,Computer science ,Orientation (computer vision) ,Field data ,05 social sciences ,020207 software engineering ,Statistical model ,02 engineering and technology ,computer.software_genre ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,Instrumentation (computer programming) ,computer ,050107 human factors - Abstract
We present Group Touch, a method for distinguishing among multiple users simultaneously interacting with a tabletop computer using only the touch information supplied by the device. Rather than tracking individual users for the duration of an activity, Group Touch distinguishes users from each other by modeling whether an interaction with the tabletop corresponds to either: (1) a new user, or (2) a change in users currently interacting with the tabletop. This reframing of the challenge as distinguishing users rather than tracking and identifying them allows Group Touch to support multi-user collaboration in real-world settings without custom instrumentation. Specifically, Group Touch examines pairs of touches and uses the difference in orientation, distance, and time between two touches to determine whether the same person performed both touches in the pair. Validated with field data from high-school students in a classroom setting, Group Touch distinguishes among users "in the wild" with a mean accuracy of 92.92% (SD=3.94%). Group Touch can imbue collaborative touch applications in real-world settings with the ability to distinguish among multiple users.
- Published
- 2017
- Full Text
- View/download PDF
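The pairwise features the abstract describes (differences in orientation, distance, and time between two touches) can be sketched as below. The field names are assumptions for illustration, and the statistical model that consumes these features to decide whether one person performed both touches is omitted.

```python
import math

def touch_pair_features(t1, t2):
    """Compute Group Touch-style pairwise features for two touches.
    Each touch is a dict with 'x', 'y' (pixels), 'angle' (degrees of
    touch orientation), and 't' (seconds). Field names are illustrative."""
    d_angle = abs(t1["angle"] - t2["angle"]) % 360
    d_angle = min(d_angle, 360 - d_angle)  # smallest angular difference
    d_dist = math.hypot(t1["x"] - t2["x"], t1["y"] - t2["y"])
    d_time = abs(t1["t"] - t2["t"])
    return {"d_angle": d_angle, "d_dist": d_dist, "d_time": d_time}
```

Intuitively, two touches close in time and space but with very different orientations are more likely to come from different people seated around the table.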
28. Interaction Proxies for Runtime Repair and Enhancement of Mobile Application Accessibility
- Author
-
Jacob O. Wobbrock, Xiaoyi Zhang, Anat Caspi, Anne Spencer Ross, and James Fogarty
- Subjects
Screen reader ,Source code ,Computer science ,media_common.quotation_subject ,05 social sciences ,020207 software engineering ,02 engineering and technology ,Software deployment ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,Android (operating system) ,Implementation ,050107 human factors ,media_common - Abstract
We introduce interaction proxies as a strategy for runtime repair and enhancement of the accessibility of mobile applications. Conceptually, interaction proxies are inserted between an application's original interface and the manifest interface that a person uses to perceive and manipulate the application. This strategy allows third-party developers and researchers to modify an interaction without an application's source code, without rooting the phone, and without otherwise modifying the application, while retaining all capabilities of the system (e.g., Android's full implementation of the TalkBack screen reader). This paper introduces interaction proxies, defines a design space of interaction re-mappings, identifies necessary implementation abstractions, presents details of implementing those abstractions in Android, and demonstrates a set of Android implementations of interaction proxies from throughout our design space. We then present a set of interviews with blind and low-vision people interacting with our prototype interaction proxies, using these interviews to explore the seamlessness of interaction, the perceived usefulness and potential of interaction proxies, and visions of how such enhancements could gain broad usage. By allowing third-party developers and researchers to improve an interaction, interaction proxies offer a new approach to personalizing mobile application accessibility and a new approach to catalyzing development, deployment, and evaluation of mobile accessibility enhancements.
- Published
- 2017
- Full Text
- View/download PDF
29. How Designing for People With and Without Disabilities Shapes Student Design Thinking
- Author
-
Cynthia L. Bennett, Kristen Shinohara, and Jacob O. Wobbrock
- Subjects
Multimedia ,05 social sciences ,020207 software engineering ,Design thinking ,02 engineering and technology ,computer.software_genre ,Assistive technology ,0202 electrical engineering, electronic engineering, information engineering ,ComputingMilieux_COMPUTERSANDSOCIETY ,Mainstream ,0501 psychology and cognitive sciences ,Engineering ethics ,Engineering design process ,Psychology ,computer ,050107 human factors - Abstract
Despite practices addressing disability in design and advocating user-centered design (UCD) approaches, popular mainstream technologies remain largely inaccessible for people with disabilities. We conducted a design course study investigating how student designers regard disability and explored how designing for both disabled and non-disabled users encouraged students to think about accessibility throughout the design process. Students focused on a design project while learning UCD concepts and techniques, working with people with and without disabilities throughout the project. We found that designing for both disabled and non-disabled users surfaced challenges and tensions in finding solutions to satisfy both groups, influencing students' attitudes toward accessible design. In addressing these tensions, non-functional aspects of accessible design emerged as important complements to functional aspects for users with and without disabilities.
- Published
- 2016
- Full Text
- View/download PDF
30. Gestures by Children and Adults on Touch Tables and Touch Walls in a Public Science Center
- Author
-
Lisa Anthony, Kathryn A. Stofer, Annie Luc, and Jacob O. Wobbrock
- Subjects
Multimedia ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer science ,05 social sciences ,020207 software engineering ,02 engineering and technology ,Public displays ,computer.software_genre ,law.invention ,Child computer interaction ,Touchscreen ,Human–computer interaction ,law ,0202 electrical engineering, electronic engineering, information engineering ,Table (database) ,0501 psychology and cognitive sciences ,Center (algebra and category theory) ,computer ,050107 human factors ,Gesture - Abstract
Research on children's interactions with touchscreen devices has examined small and large screens and compared interaction to adults or among children of different ages. Little work has explicitly compared interaction on different platforms, however. Large touchscreen displays can be deployed flat, as in a table, or vertically, as on a wall. While these two form factors have been studied, it is not known what differences may exist between them. We present a study of visitors to a science museum, including children and their parents, who interacted with Google Earth on either a touch table or a touch wall. We compare the types of gestures and interactions attempted on each device and find several interesting results, including that users of all ages tended to make standard touchscreen gestures on both platforms, but children were more likely than adults to try new gestures. Users were more likely to perform two-handed, multi-touch gestures on the touch wall than on the touch table. Our findings will inform the design of future interactive applications for each platform.
- Published
- 2016
- Full Text
- View/download PDF
31. Smart Touch
- Author
-
Jacob O. Wobbrock, Shaun K. Kane, Radu-Daniel Vatavu, and Martez E. Mott
- Subjects
Point (typography) ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,business.industry ,Computer science ,Template matching ,05 social sciences ,020207 software engineering ,02 engineering and technology ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,Computer vision ,Artificial intelligence ,business ,050107 human factors - Abstract
We present two contributions toward improving the accessibility of touch screens for people with motor impairments. First, we provide an exploration of the touch behaviors of 10 people with motor impairments, e.g., we describe how touching with the back or sides of the hand, with multiple fingers, or with knuckles creates varied multi-point touches. Second, we introduce Smart Touch, a novel template-matching technique for touch input that maps any number of arbitrary contact-areas to a user's intended (x,y) target location. The result is that users with motor impairments can touch however their abilities allow, and Smart Touch will resolve their intended touch point. Smart Touch therefore allows users to touch targets in whichever ways are most comfortable and natural for them. In an experimental evaluation, we found that Smart Touch predicted (x,y) coordinates of the users' intended target locations over three times closer to the intended target than the native Land-on and Lift-off techniques reported by the built-in touch sensors found in the Microsoft PixelSense interactive tabletop. This result is an important step toward improving touch accuracy for people with motor impairments and others for whom touch screen operation was previously impossible.
- Published
- 2016
- Full Text
- View/download PDF
32. Modeling Collaboration Patterns on an Interactive Tabletop in a Classroom Setting
- Author
-
Jacob O. Wobbrock, Abigail Evans, and Katie Davis
- Subjects
Multimedia ,Computer science ,Process (engineering) ,media_common.quotation_subject ,05 social sciences ,050301 education ,Collaborative learning ,computer.software_genre ,Field (computer science) ,Human–computer interaction ,Table (database) ,0501 psychology and cognitive sciences ,Quality (business) ,0503 education ,computer ,050107 human factors ,Educational software ,media_common - Abstract
Interaction logs generated by educational software can provide valuable insights into the collaborative learning process and identify opportunities for technology to provide adaptive assistance. Modeling collaborative learning processes at tabletop computers is challenging, as the computer is only able to log a portion of the collaboration, namely the touch events on the table. Our previous lab study with adults showed that patterns in a group's touch interactions with a tabletop computer can reveal the quality of aspects of their collaborative process. We extend this understanding of the relationship between touch interactions and the collaborative process to adolescent learners in a field setting and demonstrate that the touch patterns reflect the quality of collaboration more broadly than previously thought, with accuracies up to 84.2%. We also present an approach to using the touch patterns to model the quality of collaboration in real-time.
- Published
- 2016
- Full Text
- View/download PDF
33. Nonparametric Statistics in Human–Computer Interaction
- Author
-
Matthew W. Kay and Jacob O. Wobbrock
- Subjects
Statistics::Theory ,Computer science ,05 social sciences ,Nonparametric statistics ,020207 software engineering ,Binomial test ,02 engineering and technology ,Interaction studies ,Statistical analyses ,Statistics ,0202 electrical engineering, electronic engineering, information engineering ,Statistics::Methodology ,0501 psychology and cognitive sciences ,Ordered logit ,050107 human factors ,Parametric statistics ,Multinomial logistic regression - Abstract
Data not suitable for classic parametric statistical analyses arise frequently in human–computer interaction studies. Various nonparametric statistical procedures are appropriate and advantageous when used properly. This chapter organizes and illustrates multiple nonparametric procedures, contrasting them with their parametric counterparts. Guidance is given for when to use nonparametric analyses and how to interpret and report their results.
- Published
- 2016
- Full Text
- View/download PDF
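As a concrete instance of the kind of procedure this chapter covers, a two-sided exact sign test (a binomial test, one of the standard nonparametric counterparts to the paired t-test) can be computed in a few lines of pure Python. This is a generic textbook construction, not code from the chapter.

```python
from math import comb

def sign_test_p(successes, n, p=0.5):
    """Two-sided exact binomial (sign) test: the probability, under the
    null success probability `p`, of an outcome no more likely than
    observing `successes` out of `n` trials."""
    def binom_pmf(k):
        return comb(n, k) * p**k * (1 - p)**(n - k)
    observed = binom_pmf(successes)
    # sum probabilities of all outcomes no more likely than the observed one
    p_value = sum(binom_pmf(k) for k in range(n + 1)
                  if binom_pmf(k) <= observed + 1e-12)
    return min(1.0, p_value)
```

For example, if 10 of 10 participants are faster with one technique, the two-sided p-value is 2/1024, or about 0.002.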
34. Comparing Speech and Keyboard Text Entry for Short Messages in Two Languages on Touchscreen Phones
- Author
-
Andrew Y. Ng, Sherry Ruan, James A. Landay, Jacob O. Wobbrock, and Kenny Liou
- Subjects
FOS: Computer and information sciences ,Computer Networks and Communications ,Computer science ,Speech recognition ,Computer Science - Human-Computer Interaction ,Word error rate ,02 engineering and technology ,Mandarin Chinese ,law.invention ,Human-Computer Interaction (cs.HC) ,H.5.2 ,Touchscreen ,law ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,Text entry ,050107 human factors ,Pace ,business.industry ,Deep learning ,05 social sciences ,Pinyin ,language.human_language ,Human-Computer Interaction ,Hardware and Architecture ,language ,Artificial intelligence ,Transcription (software) ,business - Abstract
With the ubiquity of mobile touchscreen devices like smartphones, two widely used text entry methods have emerged: small touch-based keyboards and speech recognition. Although speech recognition has been available on desktop computers for years, it has continued to improve at a rapid pace, and it is currently unknown how today's modern speech recognizers compare to state-of-the-art mobile touch keyboards, which also have improved considerably since their inception. To discover both methods' "upper-bound performance," we evaluated them in English and Mandarin Chinese on an Apple iPhone 6 Plus in a laboratory setting. Our experiment was carried out using Baidu's Deep Speech 2, a deep learning-based speech recognition system, and the built-in Qwerty (English) or Pinyin (Mandarin) Apple iOS keyboards. We found that with speech recognition, the English input rate was 2.93 times faster (153 vs. 52 WPM), and the Mandarin Chinese input rate was 2.87 times faster (123 vs. 43 WPM) than the keyboard for short message transcription under laboratory conditions for both methods. Furthermore, although speech made fewer errors during entry (5.30% vs. 11.22% corrected error rate), it left slightly more errors in the final transcribed text (1.30% vs. 0.79% uncorrected error rate). Our results show that comparatively, under ideal conditions for both methods, upper-bound speech recognition performance has greatly improved compared to prior systems, and might see greater uptake in the future, although further study is required to quantify performance in non-laboratory settings for both methods.
- Published
- 2016
- Full Text
- View/download PDF
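The WPM and error-rate figures follow standard text-entry measures. A sketch using the common (|T| - 1)/5 words-per-minute convention and an edit-distance-based uncorrected error rate (standard measures from the text-entry literature, not this paper's exact analysis code):

```python
def words_per_minute(transcribed, seconds):
    """Standard text-entry WPM: (|T| - 1) / 5 words scaled to one minute,
    where |T| is the transcribed string's length. The (|T| - 1) convention
    discounts the first character, whose entry time is not measured."""
    return ((len(transcribed) - 1) / 5.0) * (60.0 / seconds)

def uncorrected_error_rate(presented, transcribed):
    """Uncorrected errors as the Levenshtein distance between the presented
    and transcribed strings, divided by the longer length (a common
    simplification of the full text-entry error-rate taxonomy)."""
    m, n = len(presented), len(transcribed)
    dp = list(range(n + 1))  # one rolling row of the edit-distance table
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # deletion
                        dp[j - 1] + 1,      # insertion
                        prev + (presented[i - 1] != transcribed[j - 1]))
            prev = cur
    return dp[n] / max(m, n, 1)
```

For example, transcribing an 11-character phrase in 12 seconds yields 10 WPM, and one wrong character in a 5-character phrase yields a 20% uncorrected error rate.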