68 results for "Cecilia Ovesdotter Alm"
Search Results
2. Understanding Differences in Human-Robot Teaming Dynamics between Deaf/Hard of Hearing and Hearing Individuals
- Author
- A'di Dust, Carola Gonzalez-Lebron, Shannon Connell, Saurav Singh, Reynold Bailey, Cecilia Ovesdotter Alm, and Jamison Heard
- Published
- 2023
3. Modeling eye movement patterns to characterize perceptual skill in image-based diagnostic reasoning processes
- Author
- Pengcheng Shi, Rui Li, Cecilia Ovesdotter Alm, Jeff B. Pelz, and Anne R. Haake
- Subjects
- Machine learning, Perceptual learning, Perception, Narrative, Hidden Markov model, Eye movement, Cognition, Categorization, Signal Processing, Computer Vision and Pattern Recognition, Artificial intelligence, Psychology, Software, Cognitive psychology
- Abstract
Experts have a remarkable capability of locating, perceptually organizing, identifying, and categorizing objects in images specific to their domains of expertise. In this article, we present a hierarchical probabilistic framework to discover the stereotypical and idiosyncratic viewing behaviors exhibited within expertise-specific groups. Through these patterned eye movement behaviors we are able to elicit the domain-specific knowledge and perceptual skills of subjects whose eye movements are recorded during diagnostic reasoning processes on medical images. Analyzing experts' eye movement patterns provides insight into the cognitive strategies exploited to solve complex perceptual reasoning tasks. An experiment was conducted to collect both eye movement and verbal narrative data from three groups of subjects with different levels of medical training or none (eleven board-certified dermatologists, four dermatologists in training, and thirteen undergraduates) while they examined and described 50 photographic dermatological images. We use a hidden Markov model to describe each subject's eye movement sequence, combined with hierarchical stochastic processes to capture and differentiate the discovered eye movement patterns shared by multiple subjects within and among the three groups. Independent experts' annotations of diagnostic conceptual units of thought in the transcribed verbal narratives are time-aligned with discovered eye movement patterns to help interpret the patterns' meanings. By mapping eye movement patterns to thought units, we uncover the relationships between the visual and linguistic elements of subjects' reasoning and perceptual processes, and show how these subjects varied their behaviors while parsing the images. We also show that inferred eye movement patterns characterize groups with similar temporal and spatial properties, and identify a subset of distinctive eye movement patterns commonly exhibited across multiple images. Based on the combinations of occurrences of these eye movement patterns, we are able to categorize the images in a novel way, from the perspective of experts' viewing strategies. In each category, images share similar lesion distributions and configurations. Our results show that modeling with multimodal data, representative of physicians' diagnostic viewing behaviors and thought processes, is feasible and informative for gaining insight into physicians' cognitive strategies as well as medical image understanding.
- Published
- 2022
4. Perceptions of Human and Machine-Generated Articles
- Author
- Ammina Kothari, Reynold Bailey, Renos Zabounidis, Cecilia Ovesdotter Alm, and Shubhra Tewari
- Subjects
- Facial expression, Political spectrum, Tone (literature), Politics, Perception, Credibility, Journalism, Skin conductance, Psychology, Social psychology
- Abstract
Automated journalism technology is transforming news production and changing how audiences perceive the news. As automated text-generation models advance, it is important to understand how readers perceive human-written and machine-generated content. This study used OpenAI’s GPT-2 text-generation model (May 2019 release) and articles from news organizations across the political spectrum to study participants’ reactions to human- and machine-generated articles. As participants read the articles, we collected their facial expression and galvanic skin response (GSR) data together with self-reported perceptions of article source and content credibility. We also asked participants to identify their political affinity and assess the articles’ political tone to gain insight into the relationship between political leaning and article perception. Our results indicate that the May 2019 release of OpenAI’s GPT-2 model generated articles that were misidentified as written by a human close to half the time, while human-written articles were identified correctly as written by a human about 70 percent of the time.
- Published
- 2021
5. Transitioning from Teaching to Mentoring: Supporting Students to Adopt Mentee Roles
- Author
- Reynold Bailey and Cecilia Ovesdotter Alm
- Subjects
- Medical education, Professional development, Science education, Undergraduate research, Coursework, Underrepresented Minority, Psychology, Autonomy
- Abstract
Mentoring is an essential aspect of scientific research training across multiple academic career levels. In student research training contexts, mentoring is intertwined with expectations of increased autonomy and development of research confidence and independence. An often assumed premise of student research mentoring is that students are cognizant of the expectations of mentee-mentor interactions in research and that they are prepared to step into their new mentee role. This assumption is problematic in undergraduate research training where students may conceptualize their interactions with faculty based on student-teacher interactions they have become familiar with in their coursework. There is also indication that the present lack of structured training elements to help students navigate changing roles may especially impact underrepresented minorities in STEM education. Our contributions include reporting on the design and lessons learned from implementing a Teaching-to-Mentoring Framework comprising six professional development strategies in the context of a cohort-based research experience program. The framework aims to support this transitioning into adopting a new role, making the distinction between mentee-mentor and student-teacher interactions transparent, and enabling students to make the most of their undergraduate research experiences.
- Published
- 2020
6. REU Mentoring Engagement
- Author
- Reynold Bailey and Cecilia Ovesdotter Alm
- Subjects
- Medical education, Computer science, Organizational culture, Internal communications, Undergraduate research, Perception, Productivity
- Abstract
To examine how faculty mentors of undergraduate research and their supervisors perceive mentoring, this work discusses the results of surveys administered after three years of a summer CS-focused REU Site program. One survey was completed by administrators of faculty research mentors (deans and chairs) and the other by the faculty mentors themselves. The surveys indicated a disconnect between how the two groups assessed undergraduate research mentoring as an indicator of faculty productivity, as well as between overt and covert recognition of undergraduate mentoring. Additional topics explored included the effectiveness of internal communication of program outcomes and ways to improve it, as well as post-program continued mentoring engagement and its links to perceptions of long-term student benefits.
- Published
- 2021
7. Handling Extreme Class Imbalance in Technical Logbook Datasets
- Author
- Cecilia Ovesdotter Alm, Travis Desell, Marcos Zampieri, and Farhad Akhbardeh
- Subjects
- Artificial neural network, Computer science, Automotive industry, Machine learning, Event (computing), Identification (information), Logbook
- Abstract
Technical logbooks are a challenging and under-explored text type for automated event identification. These texts are typically short and written in non-standard yet technical language, posing challenges to off-the-shelf NLP pipelines. The granularity of issue types described in these datasets additionally leads to class imbalance, making it challenging for models to accurately predict which issue each logbook entry describes. In this paper we focus on the problem of technical issue classification by considering logbook datasets from the automotive, aviation, and facilities maintenance domains. We adapt a feedback strategy from computer vision for handling extreme class imbalance, which resamples the training data based on its error in the prediction process. Our experiments show, with statistical significance, that this feedback strategy provides the best results for four different neural network models trained across a suite of seven technical logbook datasets from distinct technical domains. The feedback strategy is also generic and could be applied to any learning problem with substantial class imbalance.
- Published
- 2021
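The error-driven resampling idea described in this abstract can be illustrated with a small, stdlib-only Python sketch. This is not the paper's implementation: the function name, smoothing constant, and toy data below are all invented for illustration. The idea is to resample the training set so that classes with higher prediction error on the previous pass are drawn more often:

```python
import random
from collections import Counter

def feedback_resample(examples, labels, predictions, seed=0):
    """Resample a training set so classes with higher prediction
    error are drawn more often (error-driven feedback resampling)."""
    rng = random.Random(seed)
    # Per-class error rate from the previous pass's predictions.
    totals, errors = Counter(labels), Counter()
    for y, y_hat in zip(labels, predictions):
        if y != y_hat:
            errors[y] += 1
    # Weight each example by its class's error rate, with add-one
    # smoothing so perfectly classified classes are still sampled.
    weights = [(errors[y] + 1) / (totals[y] + 1) for y in labels]
    return rng.choices(list(zip(examples, labels)), weights=weights,
                       k=len(examples))

# Toy run: class "b" is always misclassified, so it tends to be oversampled.
examples = ["e1", "e2", "e3", "e4", "e5", "e6"]
labels   = ["a",  "a",  "a",  "a",  "b",  "b"]
preds    = ["a",  "a",  "a",  "a",  "a",  "a"]
resampled = feedback_resample(examples, labels, preds)
print(Counter(y for _, y in resampled))
```

A training loop would call something like this between epochs, feeding in the previous epoch's predictions.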
8. Invisible AI-driven HCI Systems – When, Why and How
- Author
- Antonios Liapis, Thomas Pederson, Alberto Alvarez, Jose M. Font, Johan Salo, and Cecilia Ovesdotter Alm
- Subjects
- Computer science, Public debate, Interaction design, Ontology (information science), Data science, Systems design, Affordance
- Abstract
The Invisible AI (InvAI'20) workshop aims to systematically discuss a growing class of interactive systems that invisibly shift some decision-making tasks from humans to machines, based on recent advances in artificial intelligence (AI), data science, and sensor or actuation technology. While interest in the affordances as well as the risks of hidden pervasive AI is high on the agenda in public debate, discussion on the topic is also needed within the human-computer interaction (HCI) community. In particular, we want to gather insights, ideas, and models for approaching the use of barely noticeable AI decision-making in systems design from a human-centered perspective, so as to make the most of the automated systems and algorithms that support human activity for both designers and users. At the same time, these systems should ensure that humans remain in charge when it counts (high-stakes decisions, privacy, monitoring for lack of explainability and fairness, etc.). What to automate and what not to automate is often a system designer's choice [8]. By taking the established concept of explicit interaction between a system and its user as a point of departure, and inviting authors to provide examples from their own research, we aim to stimulate dynamic discussion while keeping the workshop concrete and system design-focused. The workshop especially directs itself to participants from the interaction design, AI, and HCI communities. The targeted scientific outcome of the workshop is an up-to-date ontology of invisible AI-HCI systems and of hybrid human-AI collaboration mechanisms and approaches. Additionally, we expect that the workgroups and roundtables will provide starting points shaping continued discussions, new collaborations, and innovative scientific contributions that springboard from the workgroups' findings.
The focus of the proposed workshop is the bridging of two spaces of computational research that impact user experiences and societal domains (HCI and AI). Thus, the proposed workshop topic aligns well with the theme of this year's NordiCHI conference, which is Shaping Experiences, Shaping Society.
- Published
- 2020
9. Computational framework for fusing eye movements and spoken narratives for image annotation
- Author
- Cecilia Ovesdotter Alm, Jeff B. Pelz, Emily Prud'hommeaux, and Preethi Vaidyanathan
- Subjects
- Machine translation, Eye movements, Computer science, multimodal fusion, gaze, computer vision, image annotation, bitext alignment, spoken descriptions, Cluster analysis, Semantics, Ophthalmology, Automatic image annotation, Speech perception, Neural networks, Natural language processing, Spoken language
- Abstract
Despite many recent advances in the field of computer vision, there remains a disconnect between how computers process images and how humans understand them. To begin to bridge this gap, we propose a framework that integrates human-elicited gaze and spoken language to label perceptually important regions in an image. Our work relies on the notion that gaze and spoken narratives can jointly model how humans inspect and analyze images. Using an unsupervised bitext alignment algorithm originally developed for machine translation, we create meaningful mappings between participants' eye movements over an image and their spoken descriptions of that image. The resulting multimodal alignments are then used to annotate image regions with linguistic labels. The accuracy of these labels exceeds that of baseline alignments obtained using purely temporal correspondence between fixations and words. We also find differences in system performances when identifying image regions using clustering methods that rely on gaze information rather than image features. The alignments produced by our framework can be used to create a database of low-level image features and high-level semantic annotations corresponding to perceptually important image regions. The framework can potentially be applied to any multimodal data stream and to any visual domain. To this end, we provide the research community with access to the computational framework.
- Published
- 2020
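The unsupervised bitext alignment step can be illustrated with a toy, stdlib-only EM aligner in the spirit of IBM Model 1, treating fixated image regions and narrated words as two parallel "languages." The region names and word data below are invented, and this sketch omits the paper's actual preprocessing and fusion steps:

```python
from collections import defaultdict

def ibm_model1(pairs, iterations=10):
    """Toy IBM Model 1: estimate translation probabilities t(word | region)
    from parallel (regions, words) pairs via expectation-maximization."""
    regions = {r for rs, _ in pairs for r in rs}
    words = {w for _, ws in pairs for w in ws}
    # Uniform initialization of t(w | r).
    t = {r: {w: 1.0 / len(words) for w in words} for r in regions}
    for _ in range(iterations):
        count = defaultdict(float)   # expected counts c(r, w)
        total = defaultdict(float)   # expected counts c(r)
        for rs, ws in pairs:
            for w in ws:
                norm = sum(t[r][w] for r in rs)
                for r in rs:
                    p = t[r][w] / norm  # soft alignment of w to r
                    count[(r, w)] += p
                    total[r] += p
        for r in regions:
            for w in words:
                t[r][w] = count[(r, w)] / total[r]
    return t

# Invented parallel data: fixated image regions vs. narrated words.
pairs = [
    (["lesion", "skin"], ["red", "spot"]),
    (["lesion"], ["spot"]),
    (["skin", "background"], ["red", "area"]),
]
t = ibm_model1(pairs)
best = max(t["lesion"], key=t["lesion"].get)
print("lesion ->", best)
```

EM concentrates probability mass on region-word pairs that consistently co-occur, which is the same intuition that lets fixations and narrated words label image regions.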
10. Gaze-guided Magnification for Individuals with Vision Impairments
- Author
- Cecilia Ovesdotter Alm, Reynold Bailey, Natalie Maus, Dalton Rutledge, Kristen Shinohara, and Sedeeq Al-khazraji
- Subjects
- Computer science, Magnification, Gaze, Low vision, Reading (process), Eye tracking, Computer vision, Artificial intelligence
- Abstract
Video-based eye trackers increasingly have potential to improve on-screen magnification for low-vision computer users. Yet, little is known about the viability of eye tracking hardware for gaze-guided magnification. We employed a magnification prototype to assess eye tracking quality for low-vision users as they performed reading and search tasks. We show that a high degree of tracking loss prevents current video-based eye tracking from capturing gaze input for low-vision users. Our findings show current technologies were not made with low vision users in mind, and we offer suggestions to improve gaze-tracking for diverse eye input.
- Published
- 2020
11. Capturing Laughter and Smiles under Genuine Amusement vs. Negative Emotion
- Author
- Cleo Forman, Cecilia Ovesdotter Alm, Pablo Thiel, Miguel Dominguez, and Raymond Ptucha
- Subjects
- Computer science, Emotional valence, Laughter, Amusement, Task analysis, Affect (linguistics), Transfer of learning, Negative emotion, Cognitive psychology
- Abstract
Smiling and laughter are typically associated with amusement. If they occur under negative emotions, systems responding naively may confuse an uncomfortable smile or laugh with an amused state. We present a passive text and video elicitation task and collect spontaneous laughter and smiles in reaction to amusing and negative experiences, using standard, ubiquitous sensors (webcam and microphone), along with participant self-ratings. While we rely on a state-of-the-art smile recognizer, for laughter recognition our transfer learning architecture, trained on modest data, outperforms other models with up to 85% accuracy (F1 = 0.86), suggesting this technique is promising for improving affect models. Subsequently, we analyze and automatically predict laughter as amused vs. negative. However, in contrast with prior findings for acted data, classifying laughter by emotional valence is not satisfactory for this spontaneously elicited dataset.
- Published
- 2020
12. Dynamic Visualization System for Gaze and Dialogue Data
- Author
- Cecilia Ovesdotter Alm, Jonathan Kvist, Philip Ekholm, Preethi Vaidyanathan, and Reynold Bailey
- Subjects
- Computer science, Dynamic visualization, Human–computer interaction, Gaze, Language Technology (Computational Linguistics)
- Abstract
We report on and review a visualization system capable of displaying gaze and speech data elicited from pairs of subjects interacting in a discussion. We elicit such conversation data in a first experiment, in which two participants are given the task of reaching a consensus on questions involving images. We validate the system in a second experiment, whose purpose is to see whether a person can determine which question elicited a given visualization. The visualization system allows users to explore reasoning behavior and participation during multimodal dialogue interactions.
- Published
- 2020
13. Fusing Dialogue and Gaze From Discussions of 2D and 3D Scenes
- Author
- Reynold Bailey, Bradley J. S. C. Olson, Preethi Vaidyanathan, Cecilia Ovesdotter Alm, and Regina Wang
- Subjects
- Modalities, Machine translation, Computer science, Inference, Wearable computer, Gaze, Human–computer interaction, Conversation, Meaning (linguistics)
- Abstract
Conversation partners rely on inference using each other’s gaze and utterances to negotiate shared meaning. In contrast, dialogue systems still operate mostly with unimodal question or command and response interactions. To realize systems that can intuitively discuss and collaborate with humans, we should consider other sensory information. We begin to address this limitation with an innovative study that acquires, analyzes, and fuses interlocutors’ discussion and gaze. Introducing a discussion-based elicitation task, we collect gaze with remote and wearable eye trackers alongside dialogue as interlocutors come to consensus on questions about an on-screen 2D image and a real-world 3D scene. We analyze the visual-linguistic patterns, and also map the modalities onto the visual environment by extending a multimodal image region annotation framework using statistical machine translation for multimodal fusion, applying three ways of fusing speakers’ gaze and discussion.
- Published
- 2019
14. Multimodal Anticipated versus Actual Perceptual Reactions
- Author
- Cecilia Ovesdotter Alm, Tyrell Roberts, Raymond Ptucha, Christopher M. Homan, and Monali Saraf
- Subjects
- Laughter, Amusement, Facial expression, Perception, Skin response, Psychology, Cognitive psychology
- Abstract
We introduce an experimental method where subjects watch and rate humorous versus neutral videos while their reactions are collected in three modes: non-linguistic verbalizations (laughter), facial expressions, and skin response. We use unimodal analysis and predictive modeling to examine the relationship between the reactions and the perceptions anticipated by experimenters versus the subjects’ reported actual ones. We find expected associations for facial expressions and amusement, but not skin response. Laughter, while relatively infrequent, strongly indicates amusement when it occurs. Our method can apply generally for comparing anticipated versus actual responses when collecting data for learning affective human response.
- Published
- 2019
15. Affective Video Recommender System
- Author
- Reynold Bailey, Cecilia Ovesdotter Alm, and Yashowardhan Soni
- Subjects
- Facial expression, Human–computer interaction, Computer science, Recommender system, Media content
- Abstract
Video recommendation is the task of providing users with customized media content conventionally done by considering historical user ratings. We develop classifiers that learn from human faces toward a video recommender system that utilizes displayed emotional reactions to previously seen videos for predicting preferences. We use a dataset collected from subjects who watched videos selected to elicit different emotions, to model two related problems: (1) prediction of user rating and (2) whether a user would recommend a particular video. The classifiers are trained on two forms of face-based features: facial expressions and skin-estimated pulse. In addition, the impact of data augmentation and instance size are studied.
- Published
- 2019
16. Understanding Human and Predictive Moderation of Online Science Discourse
- Author
- Elizabeth Lucas, Cecilia Ovesdotter Alm, and Reynold Bailey
- Subjects
- Education, Applied psychology, Psychology, Moderation
- Abstract
Manual moderation activities can be fatiguing, emotionally exhausting, and potentially traumatizing, yet moderation is essential to the health of a discussion community. Communities, therefore, can benefit from automated moderation systems. We report on a study combining a survey about moderation behaviors with an annotation task involving forum comments, aimed at developing a deeper understanding of moderation in support of predictive tools. We also create models for distinguishing between acceptable and unacceptable scientific forum comments and discuss the results in light of moderators' responses.
- Published
- 2019
17. Quantitative Methods for Analyzing Intimate Partner Violence in Microblogs: Observational Study
- Author
- Cecilia Ovesdotter Alm, Catherine Cerulli, J. Nicolas Schrading, Raymond Ptucha, and Christopher M. Homan
- Subjects
- Microblogging, Intimate partner violence, Social media, Abusive relationship, Health informatics, Machine learning, Natural language processing, Data science, Support vector machine, Internet use, Domestic violence, Observational study, Psychology
- Abstract
Background: Social media is a rich, virtually untapped source of data on the dynamics of intimate partner violence, one that is both global in scale and intimate in detail.
Objective: The aim of this study is to use machine learning and other computational methods to analyze social media data for the reasons victims give for staying in or leaving abusive relationships.
Methods: Human annotation, part-of-speech tagging, and machine learning predictive models, including support vector machines, were used on a Twitter data set of 8767 #WhyIStayed and #WhyILeft tweets each.
Results: Our methods explored whether we can analyze micronarratives that include details about victims, abusers, and other stakeholders, the actions that constitute abuse, and how the stakeholders respond.
Conclusions: Our findings are consistent across various machine learning methods, correspond to observations in the clinical literature, and affirm the relevance of natural language processing and machine learning for exploring issues of societal importance in social media.
- Published
- 2019
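The classification pipeline described in the Methods section can be sketched in stdlib Python. Since the standard library has no SVM, a simple perceptron over binary bag-of-words features stands in here for the paper's support vector machines, and the micro-narratives below are invented stand-ins for the hashtag data:

```python
from collections import defaultdict

def featurize(text):
    """Binary bag-of-words features for a lowercased micro-narrative."""
    return defaultdict(int, ((w, 1) for w in text.lower().split()))

def train_perceptron(data, epochs=20):
    """Binary perceptron: label +1 = #WhyIStayed, -1 = #WhyILeft."""
    w = defaultdict(float)
    for _ in range(epochs):
        for text, y in data:
            feats = featurize(text)
            score = sum(w[f] * v for f, v in feats.items())
            if y * score <= 0:  # misclassified: nudge weights toward y
                for f, v in feats.items():
                    w[f] += y * v
    return w

def predict(w, text):
    score = sum(w[f] * v for f, v in featurize(text).items())
    return 1 if score > 0 else -1

# Invented micro-narratives in the style of the hashtag data.
train = [
    ("i was afraid to leave", 1),
    ("he said he would change", 1),
    ("no money and nowhere to go", 1),
    ("i finally found the courage", -1),
    ("my kids deserved better", -1),
    ("a friend helped me escape", -1),
]
w = train_perceptron(train)
print(predict(w, "afraid he would change"))
```

A real replication would use an SVM with richer features (e.g. part-of-speech tags, as in the paper) rather than this minimal learner.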
18. Intelligent medical image grouping through interactive learning
- Author
- Cecilia Ovesdotter Alm, Xuan Guo, Rui Li, Pengcheng Shi, Cara Calvelli, Qi Yu, and Anne R. Haake
- Subjects
- Visual analytics, Computer science, Machine learning, Interactive learning, Visualization, Domain knowledge, Artificial intelligence
- Abstract
Image grouping in knowledge-rich domains is challenging, since domain knowledge and human expertise are key to transforming image pixels into meaningful content. Manually marking and annotating images is not only labor-intensive but also ineffective. Furthermore, most traditional machine learning approaches cannot bridge this gap in the absence of experts' input. We thus present an interactive machine learning paradigm that allows experts to become an integral part of the learning process. This paradigm is designed for automatically computing and quantifying interpretable groupings of dermatological images. In this way, the computational evolution of an image grouping model, its visualization, and expert interactions form a loop that improves image grouping. In our paradigm, dermatologists encode their domain knowledge about the medical images by grouping a small subset of images via a carefully designed interface. Our learning algorithm automatically incorporates these manually specified connections as constraints for reorganizing the whole image dataset. Performance evaluation shows that this paradigm effectively improves image grouping based on expert knowledge.
- Published
- 2016
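The idea of folding expert-specified connections into a grouping model can be illustrated with a deliberately simplified, stdlib-only sketch. This is not the paper's algorithm: the names, toy features, and the nearest-centroid step are invented. Expert must-link pairs are merged into seed groups, and the remaining images join the group with the nearest centroid:

```python
def merge_must_links(items, must_link):
    """Union-find merge of expert-specified 'these belong together' pairs."""
    parent = {i: i for i in items}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in must_link:
        parent[find(a)] = find(b)
    groups = {}
    for i in items:
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

def assign_to_groups(features, groups, unlabeled):
    """Assign each remaining image to the group with the nearest centroid
    (squared Euclidean distance over toy feature vectors)."""
    def centroid(g):
        dims = zip(*(features[i] for i in g))
        return [sum(d) / len(g) for d in dims]
    cents = [centroid(g) for g in groups]
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    for i in unlabeled:
        k = min(range(len(cents)), key=lambda j: dist(features[i], cents[j]))
        groups[k].append(i)
    return groups

# Toy 2-D features standing in for image descriptors.
features = {"img1": [0.0, 0.0], "img2": [0.1, 0.0], "img3": [5.0, 5.0],
            "img4": [5.1, 5.0], "img5": [0.2, 0.1], "img6": [4.9, 5.2]}
seeds = merge_must_links(["img1", "img2", "img3", "img4"],
                         [("img1", "img2"), ("img3", "img4")])
groups = assign_to_groups(features, seeds, ["img5", "img6"])
print(sorted(sorted(g) for g in groups))
```

The paper's actual loop additionally re-learns the grouping model and re-visualizes it after each round of expert input; this sketch shows only a single constrained pass.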
19. A Pedagogical Model for Computational Linguistics across Curricular Boundaries
- Author
- Kathryn Womack, Timothy H. Engström, Cecilia Ovesdotter Alm, and Anne R. Haake
- Subjects
- Linguistics and Language, Computer science, Computational linguistics, Linguistics
- Published
- 2016
20. Language as Sensor in Human-Centered Computing: Clinical Contexts as Use Cases
- Author
- Cecilia Ovesdotter Alm
- Subjects
- Linguistics and Language, Computer science, Human-centered computing, Human–computer interaction, Use case, Linguistics, Artificial intelligence, Natural language processing
- Published
- 2016
21. Considerations for Face-based Data Estimates: Affect Reactions to Videos
- Author
- Kristoffer Linderman, Reynold Bailey, Cecilia Ovesdotter Alm, and Gustaf Bohlin
- Subjects
- Computer science, Human–computer interaction, Affect (psychology)
- Abstract
Video streaming is becoming the new standard for watching videos, providing an opportunity for affective video recommendation that leverages noninvasive sensing data from viewers to suggest content. Face-based data has the distinct advantage that it can be collected noninvasively with minimal equipment, such as a simple webcam. Face recordings can be used to estimate individuals' emotional states based on their facial movements, and also to estimate pulse as a signal of emotional reactions. We provide a focused, case-based contribution by reporting on methodological challenges experienced in a research study with face-based data estimates, which are then used to predict affective reactions. We build on lessons learned to formulate a set of recommendations that can be useful for continued work towards affective video recommendation.
- Published
- 2019
22. Modeling Acoustic-Prosodic Cues for Word Importance Prediction in Spoken Dialogues
- Author
- Matt Huenerfauth, Cecilia Ovesdotter Alm, and Sushant Kafle
- Subjects
- Conversational speech, Computer science, Speech recognition, Word error rate, Conversation, Meaning (linguistics)
- Abstract
Prosodic cues in conversational speech aid listeners in discerning a message. We investigate whether acoustic cues in spoken dialogue can be used to identify the importance of individual words to the meaning of a conversation turn. Individuals who are Deaf and Hard of Hearing often rely on real-time captions in live meetings. Word error rate, a traditional metric for evaluating automatic speech recognition, fails to capture that some words are more important for a system to transcribe correctly than others. We present and evaluate neural architectures that use acoustic features for 3-class word importance prediction. Our model performs competitively against state-of-the-art text-based word-importance prediction models, and it demonstrates particular benefits when operating on imperfect ASR output.
- Published
- 2019
- Full Text
- View/download PDF
23. CONTEMPORARY MULTIMODAL DATA COLLECTION METHODOLOGY FOR RELIABLE INFERENCE OF AUTHENTIC SURPRISE
- Author
-
Reynold Bailey, Cecilia Ovesdotter Alm, and Jordan Edward Shea
- Subjects
Facial expression ,Modalities ,Data collection ,Computer science ,business.industry ,media_common.quotation_subject ,Intelligent decision support system ,Inference ,Machine learning ,computer.software_genre ,Random forest ,Surprise ,Emotional expression ,Artificial intelligence ,business ,computer ,media_common - Abstract
The need for intelligent systems that can understand and convey human emotional expression is becoming increasingly prevalent. Unfortunately, most datasets for developing such systems rely on acted or exaggerated emotions, or utilize subjective labels obtained from possibly unreliable sources. This paper reports on an innovative data collection methodology for capturing multimodal human signals of authentic surprise. We introduce two tasks with a facilitator to elicit genuine reactions of surprise while co-collecting data from three human modalities: speech, facial expressions, and galvanic skin response. Our work highlights the methodological potential of biophysical measurement-based validation for enabling reliable inference. A case study is presented which provides baseline results for Random Forest classification. Using features gathered from the three modalities, our baseline system is able to identify surprise instances with approximately 20% absolute increase in accuracy compared to random assignment on a balanced dataset.
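As an illustrative aside, a Random Forest baseline of the kind described above could be sketched with scikit-learn as follows. The fused feature matrix and surprise labels below are simulated stand-ins, not the study's data, and the feature groupings named in the comments are assumptions.

```python
# Hypothetical sketch: Random Forest classification over fused multimodal
# features (speech, facial expression, galvanic skin response).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Toy fused feature matrix, standing in for e.g. pitch statistics,
# facial action-unit activations, and GSR peak counts.
X = rng.normal(size=(n, 12))
# Balanced binary labels: 1 = surprise instance, 0 = not.
y = rng.integers(0, 2, size=n)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
```

On real fused features, comparing `scores.mean()` against the 50% chance level of a balanced dataset would reproduce the kind of accuracy-over-random comparison the abstract reports.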
- Published
- 2018
- Full Text
- View/download PDF
24. Sensing Behaviors of Students in Online vs. Face-to-Face Lecturing Contexts
- Author
-
Rebecca Medina, Daniel Carpenter, Cecilia Ovesdotter Alm, Linwei Wang, Reynold Bailey, and Joe Geigel
- Subjects
Medical education ,Facial expression ,Computer science ,Online learning ,05 social sciences ,050301 education ,Face (sociological concept) ,Face-to-face ,Incentive ,Stress (linguistics) ,ComputingMilieux_COMPUTERSANDEDUCATION ,Task analysis ,0501 psychology and cognitive sciences ,Skin conductance ,0503 education ,050107 human factors - Abstract
University students are often presented with the choice between a traditional classroom and an online learning environment. Given the growing interest in web-based learning, it is essential to understand if students' needs are met in these learning environments. Sensing mechanisms enable real-time monitoring of students' reactions as they view and engage with course content. We use galvanic skin response and facial expression analysis to identify differences in behaviors associated with learning via a face-to-face versus an online lecture. We also explore the effects of incentives on learning. Findings indicate that physiological data recorded during a lecture is a good indicator of content difficulty, potentially providing a way for instructors to adjust their materials and delivery to benefit students' understanding. The data further suggests that subjects react more negatively to online lecturing and that learning incentives may have the adverse effect of increasing stress on students as opposed to improving performance.
- Published
- 2018
- Full Text
- View/download PDF
25. Towards an Affective Video Recommendation System
- Author
-
Ifeoma Nwogu, Cecilia Ovesdotter Alm, Yancarlos Diaz, and Reynold Bailey
- Subjects
Facial expression ,Computer science ,Human–computer interaction ,0206 medical engineering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,02 engineering and technology ,Video streaming ,Recommender system ,Pulse (music) ,Affective computing ,020601 biomedical engineering - Abstract
Video streaming services are prominent in people’s lives and there is a need for improved video recommendation systems that adapt to their users in a personalized way. This project uses affective computing and non-invasive sensing to address this issue. Our objective is to develop an approach that uses the viewer’s emotional reactions as the basis for recommending new content. To achieve this goal, we must first understand how viewers react to videos. We conducted a study where subjects’ facial expressions and skin-estimated pulse were monitored while watching videos. Results showed that our approach can estimate dominant emotions 70% of the time. We found no correlation between the number of emotional reactions people have and how they rate the videos they watch. Pulse estimation is reliable for measuring large changes in pulse, although it can still be improved.
- Published
- 2018
- Full Text
- View/download PDF
26. SNAG: Spoken Narratives and Gaze Dataset
- Author
-
Emily Prud'hommeaux, Preethi Vaidyanathan, Cecilia Ovesdotter Alm, and Jeff B. Pelz
- Subjects
Computer science ,business.industry ,05 social sciences ,02 engineering and technology ,Sensor fusion ,computer.software_genre ,Gaze ,050105 experimental psychology ,Task (project management) ,Stimulus modality ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,Narrative ,Artificial intelligence ,business ,computer ,Natural language processing - Abstract
Humans rely on multiple sensory modalities when examining and reasoning over images. In this paper, we describe a new multimodal dataset that consists of gaze measurements and spoken descriptions collected in parallel during an image inspection task. The task was performed by multiple participants on 100 general-domain images showing everyday objects and activities. We demonstrate the usefulness of the dataset by applying an existing visual-linguistic data fusion framework in order to label important image regions with appropriate linguistic labels.
- Published
- 2018
- Full Text
- View/download PDF
27. Sensing and Learning Human Annotators Engaged in Narrative Sensemaking
- Author
-
Luke Lapresi, Christopher M. Homan, Cecilia Ovesdotter Alm, McKenna K. Tornblad, and Raymond Ptucha
- Subjects
Facial expression ,business.industry ,Process (engineering) ,Computer science ,05 social sciences ,Perspective (graphical) ,050801 communication & media studies ,02 engineering and technology ,Sensemaking ,Crowdsourcing ,Task (project management) ,0508 media and communications ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Narrative ,business ,Storytelling ,Cognitive psychology - Abstract
While labor issues and quality assurance in crowdwork are increasingly studied, how annotators make sense of texts and how they are personally impacted by doing so are not. We study these questions via a narrative-sorting annotation task, where carefully selected (by sequentiality, topic, emotional content, and length) collections of tweets serve as examples of everyday storytelling. As readers process these narratives, we measure their facial expressions, galvanic skin response, and self-reported reactions. From the perspective of annotator well-being, a reassuring outcome was that the sorting task did not cause a measurable stress response; however, readers reacted to humor. In terms of sensemaking, readers were more confident when sorting sequential, target-topical, and highly emotional tweets. As crowdsourcing becomes more common, this research sheds light onto the perceptive capabilities and emotional impact of human readers.
- Published
- 2018
- Full Text
- View/download PDF
28. A dataset for identifying actionable feedback in collaborative software development
- Author
-
Cecilia Ovesdotter Alm, Benjamin S. Meyers, Emily Prud'hommeaux, Pradeep K. Murukannaiah, Andrew Meneely, Nuthan Munaiah, and Josephine Wolff
- Subjects
Code review ,Computer science ,business.industry ,05 social sciences ,02 engineering and technology ,Collaborative software development ,computer.software_genre ,Data science ,Outcome (game theory) ,050105 experimental psychology ,Task (project management) ,Software ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,0501 psychology and cognitive sciences ,business ,Classifier (UML) ,computer - Abstract
Software developers and testers have long struggled with how to elicit proactive responses from their coworkers when reviewing code for security vulnerabilities and errors. For a code review to be successful, it must not only identify potential problems but also elicit an active response from the colleague responsible for modifying the code. To understand the factors that contribute to this outcome, we analyze a novel dataset of more than one million code reviews for the Google Chromium project, from which we extract linguistic features of feedback that elicited responsive actions from coworkers. Using a manually-labeled subset of reviewer comments, we trained a highly accurate classifier to identify acted-upon comments (AUC = 0.85). Our results demonstrate the utility of our dataset, the feasibility of using NLP for this new task, and the potential of NLP to improve our understanding of how communications between colleagues can be authored to elicit positive, proactive responses.
- Published
- 2018
- Full Text
- View/download PDF
29. Team-based, transdisciplinary, and inclusive practices for undergraduate research
- Author
-
Cecilia Ovesdotter Alm and Reynold Bailey
- Subjects
Prioritization ,Liberal arts education ,Undergraduate research ,0202 electrical engineering, electronic engineering, information engineering ,020207 software engineering ,020201 artificial intelligence & image processing ,Cognition ,Engineering ethics ,Context (language use) ,02 engineering and technology ,Interrogation ,Electronic mail ,Theme (narrative) - Abstract
We present work-in-progress reflecting on the initial year of a distinctive summer Research Experiences for Undergraduates (REU) program. Our REU model combines fundamental research in computational sensing with a scholarly context that connects computer science with computational liberal arts. Students are intellectually stimulated to make sense of people's behaviors and cognitive processes with multimodal sensing hardware and software. In doing so, they explore the fundamental challenges found at the intersection of computing, the human experience, and scientific interrogation. The placement of the human experience at the core of the research theme enables an environment that stimulates and cultivates an innovative undergraduate research model. We highlight outcomes from the first year and discuss three emerging practices that are central to our REU framework: (1) team-based collaborative training; (2) transdisciplinary integration; and (3) systematic prioritization of inclusiveness. We also describe how these practices are incorporated into our overall undergraduate research framework and touch upon lessons learned from feedback collected.
- Published
- 2017
- Full Text
- View/download PDF
30. Sensor-based Methodological Observations for Studying Online Learning
- Author
-
Reynold Bailey, Ashley A. Edwards, Anthony Massicci, Cecilia Ovesdotter Alm, Linwei Wang, Joe Geigel, and Srinivas Sridharan
- Subjects
030506 rehabilitation ,Facial expression ,Online learning ,05 social sciences ,050301 education ,Unstructured data ,Popularity ,Synchronous learning ,03 medical and health sciences ,Human–computer interaction ,Intervention (counseling) ,Natural (music) ,Eye tracking ,0305 other medical science ,Psychology ,0503 education ,Social psychology - Abstract
Online learning has gained increased popularity in recent years. However, with online learning, teacher observation and intervention is lost, creating a need for technologically observable characteristics that can compensate for this limitation. The present study used a wide array of sensing mechanisms including eye tracking, galvanic skin response (GSR) recording, facial expression analysis, and summary note-taking to monitor participants while they watched and recalled an online video lecture. We explored the link between these human-elicited responses and learning outcomes as measured by quiz questions. Results revealed GSR to be the best indicator of the challenge level of the lecture material. Yet, eye tracking and GSR remain difficult to capture when monitoring online learning, as the requirement to remain still impacts natural behavior and leads to more stoic and unexpressive faces. Continued work on methods ensuring naturalistic capture is critical for broadening the use of sensor technology in online learning, as are ways to fuse these data with other input, such as structured and unstructured data from peer-to-peer or student-teacher interactions.
- Published
- 2017
- Full Text
- View/download PDF
31. Linguistic Analysis of Clinical Communications
- Author
-
Esa M. Rantanen, Nick Iuliucci, Cecilia Ovesdotter Alm, Tracy R. Worrell, and Nancy Valentage
- Subjects
Clinical consultation ,business.industry ,Management science ,Perspective (graphical) ,Applied psychology ,Objective measurement ,Behavioral pattern ,Ocean Engineering ,Cognition ,Time pressure ,Linguistic analysis ,Health care ,Medicine ,business - Abstract
This project took a unique perspective on the investigation of decision making in healthcare by examining clinician-patient consultation using methods from linguistics and communication. Our goal was to identify cognitive and behavioral patterns in interactions of clinicians-in-training with patients that correspond to decision making processes in time constrained situations. Objective measurement of clinician-patient communications in both naturalistic and simulated settings is a promising way to examine clinician decision making under uncertainty and time pressure, and to identify specific errors in decision making that may lead to misdiagnoses. We sought to detect distinguishable patterns of communication adopted by clinicians-in-training in clinical consultation settings under time pressure. A pilot study reported in this paper provides objective measurement tools to study miscommunication within clinician-patient consultations and may positively influence the treatments and services offered within patient consultations.
- Published
- 2014
- Full Text
- View/download PDF
32. Natural Language Insights from Code Reviews that Missed a Vulnerability
- Author
-
Josephine Wolff, Yang Yu, Emily Prud'hommeaux, Nuthan Munaiah, Cecilia Ovesdotter Alm, Andrew Meneely, Benjamin S. Meyers, and Pradeep K. Murukannaiah
- Subjects
Code review ,Computer science ,business.industry ,Vulnerability ,Software development ,020207 software engineering ,02 engineering and technology ,computer.software_genre ,Naive Bayes classifier ,Software ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Software system ,Artificial intelligence ,business ,computer ,Classifier (UML) ,Natural language ,Natural language processing - Abstract
Engineering secure software is challenging. Software development organizations leverage a host of processes and tools to enable developers to prevent vulnerabilities in software. Code reviewing is one such approach which has been instrumental in improving the overall quality of a software system. In a typical code review, developers critique a proposed change to uncover potential vulnerabilities. Despite best efforts by developers, some vulnerabilities inevitably slip through the reviews. In this study, we characterized linguistic features—inquisitiveness, sentiment and syntactic complexity—of conversations between developers in a code review, to identify factors that could explain developers missing a vulnerability. We used natural language processing to collect these linguistic features from 3,994,976 messages in 788,437 code reviews from the Chromium project. We collected 1,462 Chromium vulnerabilities to empirically analyze the linguistic features. We found that code reviews with lower inquisitiveness, higher sentiment, and lower complexity were more likely to miss a vulnerability. We used a Naive Bayes classifier to assess if the words (or lemmas) in the code reviews could differentiate reviews that are likely to miss vulnerabilities. The classifier used a subset of all lemmas (over 2 million) as features and their corresponding TF-IDF scores as values. The average precision, recall, and F-measure of the classifier were 14%, 73%, and 23%, respectively. We believe that our linguistic characterization will help developers identify problematic code reviews before they result in a vulnerability being missed.
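As an illustrative aside, the lemma TF-IDF plus Naive Bayes setup described above could be sketched with scikit-learn roughly as follows. The review messages and labels here are toy examples invented for illustration, not Chromium data, and surface tokens stand in for the lemmas used in the study.

```python
# Hypothetical sketch: TF-IDF features over review text feeding a
# Naive Bayes classifier that flags reviews likely to miss a vulnerability.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = [
    "why does this change the lock order? have you tested the failure path",
    "lgtm, looks good, ship it",
    "what happens if the buffer is empty here? please add a check",
    "nice cleanup, thanks",
]
# Toy labels: 1 = review later missed a vulnerability, 0 = it did not.
missed_vuln = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(reviews, missed_vuln)
pred = model.predict(["looks fine to me"])[0]
```

With real data, evaluating `model` with precision, recall, and F-measure would mirror the scores the abstract reports.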
- Published
- 2017
- Full Text
- View/download PDF
33. Understanding the Semantics of Narratives of Interpersonal Violence through Reader Annotations and Physiological Reactions
- Author
-
Cecilia Ovesdotter Alm, Raymond Ptucha, Christopher M. Homan, Elizabeth A. Pruett, and Alexander Calderwood
- Subjects
Coreference ,Semantic role labeling ,Stakeholder ,Narrative ,Text annotation ,Social media ,Affect (linguistics) ,Semantics ,Psychology ,Linguistics - Abstract
Interpersonal violence (IPV) is a prominent sociological problem that affects people of all demographic backgrounds. By analyzing how readers interpret, perceive, and react to experiences narrated in social media posts, we explore an understudied source for discourse about abuse. We asked readers to annotate Reddit posts about relationships with vs. without IPV for stakeholder roles and emotion, while measuring their galvanic skin response (GSR), pulse, and facial expression. We map annotations to coreference resolution output to obtain a labeled coreference chain for stakeholders in texts, and apply automated semantic role labeling for analyzing IPV discourse. Findings provide insights into how readers process roles and emotion in narratives. For example, abusers tend to be linked with violent actions and certain affect states. We train classifiers to predict stakeholder categories of coreference chains. We also find that subjects’ GSR noticeably changed for IPV texts, suggesting that co-collected measurement-based data about annotators can be used to support text annotation.
- Published
- 2017
- Full Text
- View/download PDF
34. An Analysis and Visualization Tool for Case Study Learning of Linguistic Concepts
- Author
-
Benjamin S. Meyers, Cecilia Ovesdotter Alm, and Emily Prud'hommeaux
- Subjects
Computer science ,business.industry ,02 engineering and technology ,Linguistics ,Conjunction (grammar) ,Visualization ,03 medical and health sciences ,Information visualization ,0302 clinical medicine ,Coursework ,ComputingMilieux_COMPUTERSANDEDUCATION ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computational linguistics ,business ,030217 neurology & neurosurgery - Abstract
We present an educational tool that integrates computational linguistics resources for use in non-technical undergraduate language science courses. By using the tool in conjunction with evidence-driven pedagogical case studies, we strive to provide opportunities for students to gain an understanding of linguistic concepts and analysis through the lens of realistic problems in feasible ways. Case studies tend to be used in legal, business, and health education contexts, but less in the teaching and learning of linguistics. The approach introduced also has potential to encourage students across training backgrounds to continue on to computational language analysis coursework.
- Published
- 2017
- Full Text
- View/download PDF
35. Understanding Discourse on Work and Job-Related Well-Being in Public Social Media
- Author
-
Cecilia Ovesdotter Alm, Megan C. Lytle, Tong Liu, Christopher M. Homan, Ann Marie White, and Henry Kautz
- Subjects
business.industry ,Computer science ,Civil discourse ,05 social sciences ,Supervised learning ,Media relations ,Crowdsourcing ,Data science ,Article ,Text mining ,0502 economics and business ,Ethnography ,Well-being ,050211 marketing ,0501 psychology and cognitive sciences ,Social media ,Construct (philosophy) ,business ,050107 human factors - Abstract
We construct a humans-in-the-loop supervised learning framework that integrates crowdsourcing feedback and local knowledge to detect job-related tweets from individual and business accounts. Using data-driven ethnography, we examine discourse about work by fusing language-based analysis with temporal, geospatial, and labor statistics information.
- Published
- 2016
36. The Role of Affect in the Computational Modeling of Natural Language
- Author
-
Cecilia Ovesdotter Alm
- Subjects
Linguistics and Language ,Feeling ,media_common.quotation_subject ,Component (UML) ,Relevance (information retrieval) ,Affect (linguistics) ,Computational linguistics ,Psychology ,Affective computing ,Linguistics ,Natural language ,media_common ,Quantitative linguistics - Abstract
Expressivity is an intrinsic component of natural language. This article follows the tradition of affective computing (Picard 1997) in using affect to refer to connected concepts such as emotion, mood, feelings, personality, attitude, polarity, and related subjective phenomena. The overview clarifies the relevance of affect for linguistics and computational linguistics, summarizes useful background, outlines helpful resources, and highlights important considerations for computational modeling of affect in language and affect-related linguistic behaviors. In addition, the article sketches unsettled debates, topics, problems, and areas in need of further exploration.
- Published
- 2012
- Full Text
- View/download PDF
37. Generating Clinically Relevant Texts: A Case Study on Life-Changing Events
- Author
-
Anil Behera, Cecilia Ovesdotter Alm, Titus Thomas, Christopher M. Homan, Raymond Ptucha, Emily Prud'hommeaux, and Mayuresh Oak
- Subjects
Cognitive science ,Computer science ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,02 engineering and technology
- Published
- 2016
- Full Text
- View/download PDF
38. Towards Early Dementia Detection: Fusing Linguistic and Non-Linguistic Clinical Data
- Author
-
Cecilia Ovesdotter Alm, Qi Yu, Joseph Bullard, Ruben A. Proano, and Xumin Liu
- Subjects
Population ageing ,business.industry ,Computer science ,Treatment options ,Early detection ,Diagnostic marker ,010501 environmental sciences ,medicine.disease ,01 natural sciences ,Linguistics ,03 medical and health sciences ,Identification (information) ,0302 clinical medicine ,Early dementia ,Health care ,medicine ,Dementia ,030212 general & internal medicine ,business ,Clinical record ,0105 earth and related environmental sciences - Abstract
Dementia is an increasing problem for an aging population, with a lack of available treatment options, as well as expensive patient care. Early detection is critical to eventually postpone symptoms and to prepare health care providers and families for managing a patient’s needs. Identification of diagnostic markers may be possible with patients’ clinical records. Text portions of clinical records are integrated into predictive models of dementia development in order to gain insights towards automated identification of patients who may benefit from providers’ early assessment. Results support the potential power of linguistic records for predicting dementia status, both in the absence of, and in complement to, corresponding structured nonlinguistic data.
- Published
- 2016
- Full Text
- View/download PDF
39. An Expert-in-the-loop Paradigm for Learning Medical Image Grouping
- Author
-
Rui Li, Pengcheng Shi, Cara Calvelli, Anne R. Haake, Cecilia Ovesdotter Alm, Qi Yu, and Xuan Guo
- Subjects
Visual analytics ,Pixel ,Interface (Java) ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Process (computing) ,020207 software engineering ,02 engineering and technology ,Machine learning ,computer.software_genre ,Image (mathematics) ,Visualization ,0202 electrical engineering, electronic engineering, information engineering ,Key (cryptography) ,Domain knowledge ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,computer - Abstract
Image grouping in knowledge-rich domains is challenging, since domain knowledge and expertise are key to transform image pixels into meaningful content. Manually marking and annotating images is not only labor-intensive but also ineffective. Furthermore, most traditional machine learning approaches cannot bridge this gap in the absence of experts’ input. We thus present an interactive machine learning paradigm that allows experts to become an integral part of the learning process. This paradigm is designed for automatically computing and quantifying interpretable grouping of dermatological images. In this way, the computational evolution of an image grouping model, its visualization, and expert interactions form a loop to improve image grouping. In our paradigm, dermatologists encode their domain knowledge about the medical images by grouping a small subset of images via a carefully designed interface. Our learning algorithm automatically incorporates these manually specified connections as constraints for re-organizing the whole image dataset. Performance evaluation shows that this paradigm effectively improves image grouping based on expert knowledge.
- Published
- 2016
- Full Text
- View/download PDF
40. Stressed out: what speech tells us about stress
- Author
-
Cecilia Ovesdotter Alm, Will Paul, Joe Geigel, Linwei Wang, and Reynold Bailey
- Subjects
Computer science ,Stress (linguistics) ,Cognitive psychology - Published
- 2015
- Full Text
- View/download PDF
41. Anna Filipi. Toddler and Parent Interaction: The Organization of Gaze, Pointing and Vocalization. Amsterdam, The Netherlands/Philadelphia, Pennsylvania: John Benjamins Publishing Company. 2009. xiii + 268 pp. + video clips. Hb (9789027254368) $143.00
- Author
-
Cecilia Ovesdotter Alm
- Subjects
Linguistics and Language ,Sociology and Political Science ,business.industry ,media_common.quotation_subject ,Media studies ,Art ,Gaze ,Language and Linguistics ,Visual arts ,Philosophy ,History and Philosophy of Science ,Publishing ,Toddler ,CLIPS ,business ,computer ,media_common ,computer.programming_language - Published
- 2013
- Full Text
- View/download PDF
42. English in the Ecuadorian commercial context
- Author
-
Cecilia Ovesdotter Alm
- Subjects
Linguistics and Language ,Affirmative action ,Data collection ,Sociology and Political Science ,business.industry ,media_common.quotation_subject ,Distribution (economics) ,Context (language use) ,Public relations ,Language and Linguistics ,Linguistics ,Disadvantaged ,Anthropology ,Capital (economics) ,Sociology ,Empowerment ,business ,Socioeconomic status ,media_common - Abstract
This paper presents a study completed in Quito, Ecuador's capital, in 2002. It investigates the attitudinal perceptions toward English in advertising in this context, as well as the actual distribution of English in magazine ads and commercial names of business establishments. The findings are the result of four data collection procedures: first, a questionnaire administered to advertising experts; second, an analysis of business names in ten shopping centers; third, an analysis of advertisements in Ecuadorian magazines; and fourth, an interview survey with the same group of advertising experts. The results are analyzed both quantitatively and qualitatively, with the aim of providing an attitudinal sociolinguistic profile of English in Ecuador from a descriptive, comparative and critical perspective. Adopting the socioeconomic framework presented by Bourdieu (1991), English is found to represent commercial capital. Moreover, English is shown to be highly stratified according to socioeconomic strata, and to function as a segmentizer and a gatekeeper on the Ecuadorian market. Thus, if English is to succeed in functioning as empowerment (cf. Friedrich, 2001) among the disadvantaged in Ecuador in the future, affirmative action is needed, especially within the educational sector.
- Published
- 2003
- Full Text
- View/download PDF
43. Toddler and Parent Interaction: The Organization of Gaze, Pointing and Vocalization (Pragmatics & Beyond New Series, Volume 192) by Anna Filipi
- Author
-
Cecilia Ovesdotter Alm
- Subjects
Linguistics and Language ,Philosophy ,History and Philosophy of Science ,Sociology and Political Science ,Series (mathematics) ,Pragmatics ,Toddler ,Psychology ,Gaze ,Language and Linguistics ,Cognitive psychology ,Volume (compression) - Published
- 2012
- Full Text
- View/download PDF
44. Inference from Structured and Unstructured Electronic Medical Data for Dementia Detection
- Author
-
Joseph Bullard, Rohan Murde, Qi Yu, Cecilia Ovesdotter Alm, and Rubén Proaño
- Published
- 2015
- Full Text
- View/download PDF
45. An Analysis of Domestic Abuse Discourse on Reddit
- Author
-
Raymond Ptucha, Cecilia Ovesdotter Alm, Christopher M. Homan, and Nicolas Schrading
- Subjects
Race (biology) ,Class (computer programming) ,Qualitative analysis ,music.instrument ,Work (electrical) ,Computer science ,Abusive relationship ,Domestic violence ,Criminology ,music ,Baseline (configuration management) - Abstract
Domestic abuse affects people of every race, class, age, and nation. There is significant research on the prevalence and effects of domestic abuse; however, such research typically involves population-based surveys that have high financial costs. This work provides a qualitative analysis of domestic abuse using data collected from the social and news-aggregation website reddit.com. We develop classifiers to detect submissions discussing domestic abuse, achieving accuracies of up to 92%, a substantial error reduction over the baseline. Analysis of the top features used in detecting abuse discourse provides insight into the dynamics of abusive relationships.
- Published
- 2015
- Full Text
- View/download PDF
46. Computational Integration of Human Vision and Natural Language through Bitext Alignment
- Author
-
Cecilia Ovesdotter Alm, Jeff B. Pelz, Anne R. Haake, Emily Prud'hommeaux, and Preethi Vaidyanathan
- Subjects
business.industry ,Computer science ,Speech recognition ,Artificial intelligence ,computer.software_genre ,business ,computer ,Natural language ,Natural language processing - Published
- 2015
- Full Text
- View/download PDF
47. On Utilizing Nonstandard Abbreviations and Lexicon to Infer Demographic Attributes of Twitter Users
- Author
-
Manjeet Rege, Cecilia Ovesdotter Alm, and Nathaniel Mosely
- Subjects
Phrase ,Computer science ,business.industry ,Perspective (graphical) ,Inference ,Lexicon ,computer.software_genre ,Range (mathematics) ,Artificial intelligence ,Baseline (configuration management) ,business ,computer ,Word (computer architecture) ,Natural language processing - Abstract
Automatically determining demographic attributes of writers with high accuracy, based on their texts, can be useful for a range of application domains, including smart ad placement, security, the discovery of predator behaviors, enabling automatic enhancement of participants’ profiles for extended analysis, and various other applications. It is also of interest to linguists who may wish to build on such inference for further sociolinguistic analysis. Previous work indicates that attributes such as author gender can be determined with some amount of success, using various methods, such as analysis of shallow linguistic patterns or topic, in authors’ written texts. Author age appears more difficult to determine, but previous research has been somewhat successful at classifying age as a binary (e.g. over or under 30), ternary, or even as a continuous variable using various techniques. In this work, we show that word and phrase abbreviation patterns can be used to determine user age with a novel binning scheme, as well as binary user gender and ternary user education level. Notable results include age classification accuracy of up to 83% (67% above relative majority class baseline) using a support vector machine classifier and PCA extracted features, including n-grams. User ages were classified into 10 equally sized age bins and achieved 51% accuracy (34% above baseline) when using only abbreviation features. Gender classification achieved 75% accuracy (13% above baseline) using only PCA-extracted abbreviation features, and education classification achieved 62% accuracy (19% above baseline), also with PCA-extracted abbreviation features. Also presented is an analysis of the evident change in author abbreviation use over time on Twitter.
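As an illustrative aside, the PCA-plus-SVM pipeline and equally sized age bins described above could be sketched with scikit-learn as follows. The abbreviation-usage features and ages here are simulated, and the feature dimensionality and PCA component count are assumptions.

```python
# Hypothetical sketch: PCA-extracted features feeding an SVM that predicts
# 10 equally sized (quantile-based) age bins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 50))  # stand-in for abbreviation and n-gram features
age = rng.uniform(13, 70, size=n)

# 10 equally sized age bins via deciles of the observed ages.
edges = np.quantile(age, np.linspace(0, 1, 11))
y = np.clip(np.digitize(age, edges[1:-1]), 0, 9)

clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
clf.fit(X, y)
```

Quantile-based edges guarantee roughly equal bin occupancy, which keeps the majority-class baseline near 10% for the 10-way task.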
- Published
- 2015
- Full Text
- View/download PDF
48. #WhyIStayed, #WhyILeft: Microblogging to Make Sense of Domestic Abuse
- Author
-
Christopher M. Homan, Nicolas Schrading, Raymond Ptucha, and Cecilia Ovesdotter Alm
- Subjects
music.instrument ,Computer science ,business.industry ,Microblogging ,Abusive relationship ,Internet privacy ,Domestic violence ,Social media ,music ,business ,Baseline (configuration management) ,Classifier (UML) - Abstract
In September 2014, Twitter users unequivocally reacted to the Ray Rice assault scandal by unleashing personal stories of domestic abuse via the hashtags #WhyIStayed or #WhyILeft. We explore at a macro-level firsthand accounts of domestic abuse from a substantial, balanced corpus of tweeted instances designated with these tags. To seek insights into the reasons victims give for staying in vs. leaving abusive relationships, we analyze the corpus using linguistically motivated methods. We also report on an annotation study for corpus assessment. We perform classification, contributing a classifier that discriminates between the two hashtags exceptionally well at 82% accuracy with a substantial error reduction over its baseline.
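A minimal sketch of the two-hashtag discrimination task described above might look like the following. The tweets, labels, and model choice (tf-idf with logistic regression) are illustrative assumptions, not the paper's actual corpus or classifier; hashtags are omitted from the texts since leaving them in would leak the labels.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical tweet texts with the identifying hashtags stripped.
tweets = [
    "he said he would change and I believed him",
    "I had nowhere else to go with the kids",
    "I realized my kids deserved better",
    "one day I found the courage to walk out",
]
labels = ["stayed", "stayed", "left", "left"]  # stands in for the two hashtags

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, labels)
print(clf.predict(["I finally walked out the door"]))
```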
- Published
- 2015
- Full Text
- View/download PDF
49. Fusing Multimodal Human Expert Data to Uncover Hidden Semantics
- Author
-
Xuan Guo, Qi Yu, Cecilia Ovesdotter Alm, Rui Li, and Anne R. Haake
- Subjects
business.industry ,Computer science ,Semantic analysis (machine learning) ,Semantics ,Sensor fusion ,Machine learning ,computer.software_genre ,External Data Representation ,Multimodality ,Non-negative matrix factorization ,Eye tracking ,Artificial intelligence ,Cluster analysis ,business ,computer - Abstract
Problem solving in complex visual domains involves multiple levels of cognitive processing. Analyzing and representing these cognitive processes requires the elicitation and study of multimodal human data. We have developed methods for extracting experts' visual behaviors and verbal descriptions during medical image inspection. Here we address fusion of these data toward building a novel framework for organizing elements of expertise as a foundation for knowledge-dependent computational systems. In this paper, a multimodal graph-regularized non-negative matrix factorization approach is developed and used to fuse multimodal data collected during medical image inspection. Our experimental results on the new data representation demonstrate the effectiveness of the proposed data fusion approach.
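The fusion idea can be sketched with plain NMF on concatenated, per-modality-normalized feature matrices; the shared factor matrix W then gives one low-dimensional representation spanning both modalities. The data here are random placeholders, and ordinary NMF stands in for the paper's graph-regularized multimodal variant, which additionally constrains the factors with similarity graphs.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
# Hypothetical nonnegative feature matrices for two modalities;
# rows correspond to the same image-inspection sessions.
gaze_features = rng.random((20, 6))     # e.g. fixation statistics
verbal_features = rng.random((20, 10))  # e.g. term counts from narratives

# Scale each modality before concatenating so neither dominates.
fused = np.hstack([normalize(gaze_features), normalize(verbal_features)])

# W is a shared low-dimensional representation across both modalities.
nmf = NMF(n_components=4, init="nndsvda", random_state=0, max_iter=500)
W = nmf.fit_transform(fused)
print(W.shape)  # (20, 4)
```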
- Published
- 2014
- Full Text
- View/download PDF
50. Toward inferring the age of Twitter users with their use of nonstandard abbreviations and lexicon
- Author
-
Nathaniel Moseley, Cecilia Ovesdotter Alm, and Manjeet Rege
- Subjects
Continuous variable ,Range (mathematics) ,Phrase ,Information retrieval ,Computer science ,Support vector machine classifier ,Lexicon ,Baseline (configuration management) ,Word (computer architecture) ,Majority class - Abstract
Automatically determining demographic profile attributes of writers with high accuracy, based on their texts, can be useful for a range of application domains, including smart ad placement, security, the discovery of predator behaviors, and the automatic enhancement of participants' profiles for extended analysis. Attributes such as author gender can be determined with some success from many sources, using various methods, such as analysis of shallow linguistic patterns or topic. Author age is more difficult to determine, but previous research has been somewhat successful at classifying age as a binary (e.g., over or under 30), ternary, or even continuous variable using various techniques. In this work, we show that word and phrase abbreviation patterns can be used toward determining user age with novel binning. Notable results include classification accuracy of up to 82.8% (67.0% above the relative majority class baseline) when classifying user ages into 10 equally sized age bins using a support vector machine classifier and PCA-extracted features, including n-grams, and 50.8% accuracy (33.7% above baseline) when using only abbreviation features. Also presented is an analysis of the evident change in abbreviation use over time on Twitter.
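The "equally sized age bins" mentioned above are equal-count bins rather than equal-width ranges; a common way to construct them is with quantile edges. The ages below are hypothetical, and this sketch only illustrates the general quantile-binning technique, not the paper's exact binning procedure.

```python
import numpy as np

# Hypothetical user ages, to be split into 10 equally sized (equal-count) bins.
ages = np.array([14, 16, 17, 19, 21, 22, 24, 27, 30, 33,
                 35, 38, 41, 44, 47, 51, 55, 60, 66, 72])

n_bins = 10
# Quantile edges put the same number of users into each bin.
edges = np.quantile(ages, np.linspace(0, 1, n_bins + 1))
bin_ids = np.clip(np.searchsorted(edges, ages, side="right") - 1, 0, n_bins - 1)

counts = np.bincount(bin_ids, minlength=n_bins)
print(counts)  # two users per bin
```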
- Published
- 2014
- Full Text
- View/download PDF