8 results
Search Results
2. The Social Consequences of Language Technologies and Their Underlying Language Ideologies
- Author
Maria Goldshtein, Jaclyn Ocumpaugh, Andrew Potter, and Rod D. Roscoe
- Abstract
As language technologies have become more sophisticated and prevalent, there have been increasing concerns about bias in natural language processing (NLP). Such work often focuses on the effects of bias rather than its sources. In contrast, this paper discusses how normative language assumptions and ideologies influence a range of automated language tools. These underlying assumptions can inform (a) grammar and tone suggestions provided by commercial products, (b) language varieties (e.g., dialects and other norms) taught by language learning technologies, and (c) language patterns used by chatbots and similar applications to interact with users. These tools demonstrate considerable technological advancement but are rarely interrogated with regard to the language ideologies they intentionally or implicitly reinforce. We consider prior research on language ideologies and how they may impact (at scale) the large language models (LLMs) that underlie many automated language technologies. Specifically, this paper draws on established theoretical frameworks for understanding how humans typically perceive or judge language varieties and patterns that may differ from their own or their perceived standard. We then discuss how language ideologies can perpetuate social hierarchies and stereotypes, even within seemingly impartial automation. In doing so, we contribute to the emerging literature on how the risks of language ideologies and assumptions can be better understood and mitigated in the design, testing, and implementation of automated language technologies. [This paper was published in: "Universal Access in Human-Computer Interaction (HCII 2024) Proceedings, LNCS 14696," edited by M. Antona and C. Stephanidis, 2024, pp. 271-290.]
- Published
- 2024
- Full Text
- View/download PDF
3. Designing Tools for Caregiver Involvement in Intelligent Tutoring Systems for Middle School Mathematics
- Author
Ha Tien Nguyen, Conrad Borchers, Meng Xia, and Vincent Aleven
- Abstract
Intelligent tutoring systems (ITS) can help students learn successfully, yet little work has explored the role of caregivers in shaping that success. Past interventions to help caregivers support their child's homework have been largely disconnected from educational technology. This paper presents prototyping design research with nine middle school caregivers. We ask: (1) what are caregivers' preferences for different prototypes incorporating data-driven recommendations into their math homework support? Integrating caregivers' preferences, we then ask: (2) what are caregivers' perceptions when interacting with a prototype of an intelligent chatbot tool to support students' homework? We found that caregivers reported feeling comfortable integrating AI into their practices and appreciated chat-based support for understanding content and effective ITS use. Our results highlight the affordances of ITS data and AI to assist caregivers who would otherwise not be able to support their child's homework, paving the way for more effective and equitable mathematics learning. [This paper will be published in the ISLS 2024 proceedings.]
- Published
- 2024
4. Beyond the Obvious Multi-Choice Options: Introducing a Toolkit for Distractor Generation Enhanced with NLI Filtering
- Author
Andreea Dutulescu, Stefan Ruseti, Denis Iorga, Mihai Dascalu, and Danielle S. McNamara
- Abstract
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous meanings, or imply the same information. To overcome these challenges, we propose a comprehensive toolkit that integrates various approaches for generating distractors, including leveraging a general knowledge base and employing a T5 LLM. Additionally, we introduce a novel strategy that utilizes natural language inference to increase the accuracy of the generated distractors by removing confusing options. Our models have zero-shot capabilities and achieve good results on the DGen dataset; moreover, the models were fine-tuned and outperformed state-of-the-art methods on the considered dataset. To further extend the analysis, we introduce human annotations with scores for 100 test questions with 1085 distractors in total. The evaluations indicated that our generated options are of high quality, surpass all previous automated methods, and are on par with the ground truth of human-defined alternatives. [This paper was published in: "AIED 2024, LNAI 14830," edited by A. M. Olney et al., Springer Nature Switzerland, 2024, pp. 242-250.] [An illustrative sketch of the NLI filtering step follows this entry.]
- Published
- 2024
- Full Text
- View/download PDF
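The NLI filtering strategy described in the abstract above can be pictured with a minimal sketch. This is not the authors' toolkit; it merely assumes an off-the-shelf MNLI checkpoint (roberta-large-mnli via Hugging Face transformers) and drops any candidate distractor that entails, or is entailed by, the correct answer.

```python
# Hypothetical sketch of NLI-based distractor filtering (not the authors' toolkit).
# Assumes the Hugging Face transformers library and the public roberta-large-mnli
# checkpoint; candidates that entail, or are entailed by, the answer are dropped.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def entails(premise: str, hypothesis: str) -> bool:
    """Return True if the NLI model predicts that `premise` entails `hypothesis`."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[int(logits.argmax(dim=-1))] == "ENTAILMENT"

def filter_distractors(question: str, answer: str, candidates: list[str]) -> list[str]:
    """Drop candidate distractors that are semantically too close to the answer."""
    kept = []
    for cand in candidates:
        correct = f"{question} {answer}"
        option = f"{question} {cand}"
        # Remove the candidate if either direction of entailment holds.
        if entails(correct, option) or entails(option, correct):
            continue
        kept.append(cand)
    return kept

print(filter_distractors(
    "Water boils at",
    "100 degrees Celsius",
    ["212 degrees Fahrenheit", "90 degrees Celsius", "0 degrees Celsius"],
))
```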
5. Using Self-Regulated Learning Supported by Artificial Intelligence (AI) Chatbots to Develop EFL Student Teachers' Self-Expression and Reflective Writing Skills
- Author
Mahmoud M. S. Abdallah
- Abstract
This research study explores the potential of a pedagogical model of self-regulated learning supported by Artificial Intelligence (AI) chatbots to enhance self-expression and reflective writing skills for novice EFL student teachers at the Faculty of Education, Assiut University. The study adopted a pre-post quasi-experimental design that started with the identification of the necessary self-expression and reflective writing skills for the target participants (50 fresh EFL student teachers at Assiut University who were purposively selected using a screening questionnaire based on their basic IT literacy skills). A pre-test was administered to assess their initial skill levels in self-expression and reflective writing. Then, an intervention was implemented in the form of a pedagogical model designed around the principles of self-regulated learning and situated language learning, which guided the use of AI chatbots (Bing, ChatGPT, and Google Bard). This model was initially piloted on a small sample (n = 10) of EFL student teachers to check validity and reliability and was then implemented with the research participants for eight weeks during the first semester of the academic year 2023/24. Following the intervention, a post-test was conducted to measure the participants' self-expression and reflective writing skills and to identify any improvements gained from the intervention. The results indicated a positive effect, with noticeable enhancements in the EFL student teachers' skills. This suggests the potential effectiveness of the model in fostering self-expression and reflective writing skills and developing EFL student teachers' general language proficiency and IT literacy. [This paper was published in "Academic Journal of Faculty of Education, Assiut University" v40 n9 p1-50 2024.]
- Published
- 2024
6. The Promises and Pitfalls of Using Language Models to Measure Instruction Quality in Education. EdWorkingPaper No. 24-948
- Author
Annenberg Institute for School Reform at Brown University, Paiheng Xu, Jing Liu, Nathan Jones, Julie Cohen, and Wei Ai
- Abstract
Assessing instruction quality is a fundamental component of any improvement effort in the education system. However, traditional manual assessments are expensive, subjective, and heavily dependent on observers' expertise and idiosyncratic factors, preventing teachers from getting timely and frequent feedback. Unlike prior research that focuses on low-inference instructional practices, this paper presents the first study that leverages Natural Language Processing (NLP) techniques to assess multiple high-inference instructional practices in two distinct educational settings: in-person K-12 classrooms and simulated performance tasks for pre-service teachers. This is also the first study that applies NLP to measure a teaching practice that has been demonstrated to be particularly effective for students with special needs. We confront two challenges inherent in NLP-based instructional analysis: noisy, lengthy input data and highly skewed distributions of human ratings. Our results suggest that pretrained language models (PLMs) demonstrate performance comparable to the agreement level of human raters for variables that are more discrete and require lower inference, but their efficacy diminishes with more complex teaching practices. Interestingly, using only teachers' utterances as input yields strong results for student-centered variables, alleviating common concerns over the difficulty of collecting and transcribing high-quality student speech data in in-person teaching settings. Our findings highlight both the potential and the limitations of current NLP techniques in the education domain, opening avenues for further exploration. [An illustrative sketch of scoring a long transcript with a pretrained model follows this entry.]
- Published
- 2024
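A minimal sketch of the general idea (scoring a long, noisy transcript with a pretrained language model) is given below. It is not the paper's pipeline; it assumes bert-base-uncased, a simple chunk-then-average strategy for transcripts longer than the 512-token limit, and a toy linear head standing in for a model fine-tuned on human observer ratings.

```python
# Hypothetical sketch of scoring a long classroom transcript with a pretrained
# language model (not the paper's pipeline). Assumes bert-base-uncased, chunking
# for inputs beyond the 512-token limit, and a toy linear head standing in for a
# model fine-tuned on human observer ratings.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()

def transcript_embedding(utterances: list[str], window: int = 400) -> torch.Tensor:
    """Encode a long transcript by chunking tokens and averaging [CLS] vectors."""
    text = " ".join(utterances)  # e.g., teacher utterances only
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    chunks = [ids[i:i + window] for i in range(0, len(ids), window)] or [[]]
    vectors = []
    with torch.no_grad():
        for chunk in chunks:
            chunk_text = tokenizer.decode(chunk)
            enc = tokenizer(chunk_text, return_tensors="pt", truncation=True, max_length=512)
            vectors.append(encoder(**enc).last_hidden_state[:, 0, :])  # [CLS] vector
    return torch.mean(torch.cat(vectors, dim=0), dim=0)

# Toy regression head; in practice this would be fine-tuned against observer ratings.
rating_head = torch.nn.Linear(encoder.config.hidden_size, 1)

teacher_turns = ["Let's look at problem three together.", "Why do you think that works?"]
print(float(rating_head(transcript_embedding(teacher_turns))))
```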
7. The Automated Model of Comprehension Version 4.0 -- Validation Studies and Integration of ChatGPT
- Author
Dragos-Georgian Corlatescu, Micah Watanabe, Stefan Ruseti, Mihai Dascalu, and Danielle S. McNamara
- Abstract
Modeling reading comprehension processes is a critical task for Learning Analytics, as accurate models of the reading process can be used to match students to texts, identify appropriate interventions, and predict learning outcomes. This paper introduces an improved version of the Automated Model of Comprehension (AMoC), version 4.0. AMoC has its roots in two theoretical models of the comprehension process (i.e., the Construction-Integration model and the Landscape model), and the new version leverages state-of-the-art large language models, specifically ChatGPT, to better contextualize the text and simplify construction of the underlying graph model. Besides showcasing the use of the model, the study introduces three in-depth psychological validations that argue for the model's adequacy in modeling reading comprehension. In these studies, we demonstrated that AMoC is in line with the theoretical background proposed by the Construction-Integration and Landscape models, and it is better at replicating results from previous human psychological experiments than its predecessor. Thus, AMoC v4.0 can be further used as an educational tool to, for example, help teachers design better learning materials personalized for student profiles. Additionally, we release the code from AMoC v4.0 as open source in a Google Colab notebook and a GitHub repository. [An illustrative concept-graph sketch follows this entry.]
- Published
- 2024
- Full Text
- View/download PDF
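The following sketch is purely illustrative of a Construction-Integration/Landscape-style concept graph; the actual AMoC v4.0 construction, including its ChatGPT-based contextualization, is defined in the authors' released notebook and repository. Here, "concepts" are just lowercase content words, and activation simply decays across sentences.

```python
# Purely illustrative concept-graph sketch (not AMoC's actual algorithm; the
# released notebook and repository define the real model). Concepts here are just
# lowercase content words; AMoC v4.0 reportedly uses ChatGPT to contextualize the
# text before constructing its graph.
import itertools
import re
import networkx as nx

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "was", "it"}

def concepts(sentence: str) -> list[str]:
    """Very rough stand-in for concept extraction."""
    return [w for w in re.findall(r"[a-z]+", sentence.lower()) if w not in STOPWORDS]

def build_graph(sentences: list[str], decay: float = 0.5) -> nx.Graph:
    """Link concepts that co-occur in a sentence; decay activation across sentences."""
    graph = nx.Graph()
    for sentence in sentences:
        # Landscape-model flavor: activation of previously read concepts decays.
        for node in graph.nodes:
            graph.nodes[node]["activation"] *= decay
        current = set(concepts(sentence))
        for c in current:
            prior = graph.nodes[c]["activation"] if c in graph else 0.0
            graph.add_node(c, activation=prior + 1.0)
        # Construction-Integration flavor: concepts read together become linked.
        for u, v in itertools.combinations(current, 2):
            weight = graph.get_edge_data(u, v, {"weight": 0.0})["weight"] + 1.0
            graph.add_edge(u, v, weight=weight)
    return graph

g = build_graph(["The knight rode to the castle.", "The castle gate was closed."])
print(g.nodes(data=True))
print(g.edges(data=True))
```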
8. Inside the Boardroom: Evidence from the Board Structure and Meeting Minutes of Community Banks.
- Author
Rosalind L. Bennett, Manju Puri, and Paul E. Soto
- Subjects
COMMUNITY banks, CORPORATE governance, BOARDS of directors, MEETINGS, MACHINE learning
- Abstract
Community banks are critical for local economies, yet research on their corporate governance has been scarce due to limited data availability. We explore a unique, proprietary dataset of board membership and meeting minutes of failed community banks to present several stylized facts regarding their board structure and meetings. Community bank boards have fewer members and a higher percentage of insiders than larger publicly traded banks, and experience little turnover during normal times. Their meetings are held monthly and span about two hours. During times of distress, community bank boards convene regularly scheduled meetings less often in favor of impromptu meetings, experience higher turnover, particularly among their independent directors, and their meeting tone switches from neutral to significantly negative. Board attention during distressed times shifts towards discussion of capital and examination oversight, and away from lending activities and meeting formalities. [An illustrative tone-measure sketch follows this entry.]
- Published
- 2024
- Full Text
- View/download PDF
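A minimal, hypothetical sketch of a word-list tone measure for meeting minutes appears below. It is not the authors' methodology; the negative-word list is a tiny illustrative stand-in for a full finance lexicon such as Loughran-McDonald.

```python
# Hypothetical sketch of a word-list tone measure for meeting minutes (not the
# authors' methodology). The negative-word list below is a tiny illustrative
# stand-in for a full finance lexicon such as Loughran-McDonald.
import re

NEGATIVE_WORDS = {"loss", "losses", "criticized", "downgrade", "deterioration",
                  "failure", "delinquent", "impaired", "deficiency"}

def negative_tone(minutes_text: str) -> float:
    """Share of tokens in the minutes that come from the negative word list."""
    tokens = re.findall(r"[a-z]+", minutes_text.lower())
    return sum(t in NEGATIVE_WORDS for t in tokens) / len(tokens) if tokens else 0.0

normal = "The board reviewed lending activity and approved the prior meeting minutes."
distress = "The examiners criticized loan losses and the bank's impaired capital position."
print(round(negative_tone(normal), 3), round(negative_tone(distress), 3))
```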