14 results for "Watanabe, Micah"
Search Results
2. Age of Exposure 2.0: Estimating Word Complexity Using Iterative Models of Word Embeddings
- Author
Botarleanu, Robert-Mihai, Dascalu, Mihai, Watanabe, Micah, Crossley, Scott Andrew, and McNamara, Danielle S.
- Abstract
Age of acquisition (AoA) is a measure of word complexity which refers to the age at which a word is typically learned. AoA measures have shown strong correlations with reading comprehension, lexical decision times, and writing quality. AoA scores based on both adult and child data have limitations that introduce measurement error and increase the cost and effort of producing them. In this paper, we introduce Age of Exposure (AoE) version 2, a proxy for human exposure to new vocabulary terms that expands AoA word lists by training regressors to predict AoA scores. Word2vec word embeddings are trained on cumulatively increasing corpora of texts, word exposure trajectories are generated by aligning the word2vec vector spaces, and features of words are derived for modeling AoA scores. Our prediction models achieve low errors (from 13% with a corresponding R² of 0.35 down to 7% with an R² of 0.74), can be uniformly applied to different AoA word lists, and generalize to the entire vocabulary of a language. Our method benefits from using existing readability indices to define the order of texts in the corpora, while the performed analyses confirm that the generated AoA scores accurately predicted the difficulty of texts (R² of 0.84, surpassing related previous work). Further, we provide evidence of the internal reliability of our word trajectory features, demonstrate the effectiveness of the word trajectory features when contrasted with simple lexical features, and show that the exclusion of features that rely on external resources does not significantly impact performance. [This is the online first version of an article published in "Behavior Research Methods."] (An illustrative code sketch of this kind of pipeline appears after this record.)
- Published
- 2022
- Full Text
- View/download PDF
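The pipeline summarized in the abstract above can be illustrated with a short, hypothetical sketch: train word2vec on cumulatively larger corpora, align the resulting vector spaces, and regress AoA scores on word-trajectory features. This is not the authors' code; the corpus ordering, the trajectory feature (cosine similarity of each intermediate vector to the final vector), the ridge regressor, and all hyperparameters are assumptions made for illustration only.

```python
# Minimal sketch of an AoE-style pipeline (not the authors' implementation).
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import Ridge

def train_snapshots(ordered_sentence_batches, dim=100):
    """Train one word2vec model per cumulative slice of the ordered corpus."""
    models, cumulative = [], []
    for batch in ordered_sentence_batches:      # batches ordered by readability
        cumulative.extend(batch)
        models.append(Word2Vec(cumulative, vector_size=dim, min_count=1, epochs=5))
    return models

def align(source, target, shared_words):
    """Orthogonal Procrustes: rotation taking `source` vectors into `target`'s space."""
    A = np.stack([source.wv[w] for w in shared_words])
    B = np.stack([target.wv[w] for w in shared_words])
    u, _, vt = np.linalg.svd(A.T @ B)
    return u @ vt

def trajectory_features(models, word):
    """Cosine similarity of a word's intermediate vectors to its final vector."""
    final = models[-1]
    shared = [w for w in final.wv.index_to_key if all(w in m.wv for m in models)]
    feats = []
    for m in models[:-1]:
        if word not in m.wv:
            feats.append(0.0)                   # word not yet "exposed"
            continue
        R = align(m, final, shared)             # recomputed per call: fine for a sketch
        v = m.wv[word] @ R
        f = final.wv[word]
        feats.append(float(v @ f / (np.linalg.norm(v) * np.linalg.norm(f))))
    return feats

def fit_aoa_regressor(models, aoa_dict):
    """Fit on words with known AoA, then generalize to the rest of the vocabulary."""
    words = [w for w in aoa_dict if w in models[-1].wv]
    X = np.array([trajectory_features(models, w) for w in words])
    y = np.array([aoa_dict[w] for w in words])
    return Ridge().fit(X, y)
```

The published model uses a richer feature set and compares against simple lexical baselines; this sketch only shows the overall shape of the computation.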
3. Multilingual Age of Exposure
- Author
Botarleanu, Robert-Mihai, Dascalu, Mihai, Watanabe, Micah, McNamara, Danielle S., and Crossley, Scott Andrew
- Abstract
The ability to objectively quantify the complexity of a text can be a useful indicator of how likely learners at a given level are to comprehend it. Before creating more complex models of text difficulty, it is worth noting that the basic building blocks of a text are its words and that, inherently, a text's overall difficulty is greatly influenced by the complexity of those words. One approach is to measure a word's Age of Acquisition (AoA), an estimate of the average age at which a speaker of a language understands the semantics of a specific word. Age of Exposure (AoE) statistically models the process of word learning and, in turn, provides an estimate of a given word's AoA. In this paper, we expand on the AoE model by training regression models that learn and generalize AoA word lists across multiple languages including English, German, French, and Spanish. Our approach allows for the estimation of AoA scores for words that are not found in the original lists, covering up to the majority of the target language's vocabulary. Our method can be uniformly applied across multiple languages through the use of parallel corpora and helps bridge the gap in the size of AoA word lists available for non-English languages. This is particularly important for efforts toward extending AI to languages with fewer resources and benchmarked corpora. [This paper was published in: "AIED 2021," edited by I. Roll et al., Springer Nature Switzerland AG, 2021, pp. 77-87.] (An illustrative sketch of the cross-lingual idea appears after this record.)
- Published
- 2021
- Full Text
- View/download PDF
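A hedged sketch of the cross-lingual generalization idea, under the strong assumption that source- and target-language word vectors already live in a single aligned space (for instance, one obtained with the help of a parallel corpus). The function names, the ridge regressor, and the data structures are illustrative, not the authors' implementation.

```python
# Cross-lingual AoA generalization sketch under an assumed shared vector space.
import numpy as np
from sklearn.linear_model import Ridge

def fit_on_source(source_vectors, source_aoa):
    """Train on the high-resource language's AoA list (e.g., English)."""
    words = [w for w in source_aoa if w in source_vectors]
    X = np.array([source_vectors[w] for w in words])
    y = np.array([source_aoa[w] for w in words])
    return Ridge().fit(X, y)

def predict_for_target(model, target_vectors):
    """Score every word of the lower-resource language in the shared space."""
    words = list(target_vectors)
    X = np.array([target_vectors[w] for w in words])
    return dict(zip(words, model.predict(X)))

# Hypothetical usage (word -> vector and word -> AoA dictionaries assumed):
# model = fit_on_source(english_vectors, english_aoa)
# german_aoa_estimates = predict_for_target(model, german_vectors)
```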
4. The Automated Model of Comprehension Version 3.0: Paying Attention to Context
- Author
Corlatescu, Dragos, Watanabe, Micah, Ruseti, Stefan, Dascalu, Mihai, and McNamara, Danielle S.
- Abstract
Reading comprehension is essential for both knowledge acquisition and memory reinforcement. Automated modeling of the comprehension process provides insights into the efficacy of specific texts as learning tools. This paper introduces an improved version of the Automated Model of Comprehension, version 3.0 (AMoC v3.0). AMoC v3.0 is based on two theoretical models of the comprehension process, namely the Construction-Integration and the Landscape models. In addition to the lessons learned from the previous versions, AMoC v3.0 uses Transformer-based contextualized embeddings to build and update the concept graph as a simulation of reading. Besides taking into account generative language models and presenting a visual walkthrough of how the model works, AMoC v3.0 surpasses the previous version in terms of the Spearman correlations between our activation scores and the values reported in the original Landscape Model for the presented use case. Moreover, features derived from AMoC significantly differentiate between high- and low-cohesion texts, thus arguing for the model's capabilities to simulate different reading conditions. [This paper was published in: "AIED 2023: Artificial Intelligence in Education," edited by N. Wang et al., Springer, Switzerland, 2023, pp. 229-241.] (A minimal illustrative sketch of such a graph-building loop appears after this record.)
- Published
- 2023
- Full Text
- View/download PDF
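The following is a minimal, hypothetical illustration of a concept graph built and updated sentence by sentence with Transformer embeddings and decaying activations, loosely in the spirit of the Construction-Integration and Landscape models referenced above. The encoder choice (bert-base-uncased), the decay rate, and the similarity threshold are assumptions; the real AMoC v3.0 is considerably richer.

```python
# Sketch only: sentence-by-sentence concept graph with contextual embeddings.
import networkx as nx
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(text):
    """Mean-pooled contextualized embedding for a span of text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)

def simulate_reading(sentences, decay=0.5, threshold=0.6):
    """Add each sentence to the graph; decay activations from earlier cycles."""
    graph, activation = nx.Graph(), {}
    for sentence in sentences:
        for node in activation:                      # decay the previous cycle
            activation[node] *= decay
        vec = embed(sentence)
        # connect the new node to semantically similar, previously read nodes
        for node, data in list(graph.nodes(data=True)):
            sim = torch.cosine_similarity(vec, data["vec"], dim=0).item()
            if sim >= threshold:
                graph.add_edge(sentence, node, weight=sim)
        graph.add_node(sentence, vec=vec)
        activation[sentence] = 1.0                   # just read: fully active
    return graph, activation
```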
5. The Motivational Utility of Knowledge: Examining Fundamental Needs in the Context of Houselessness Knowledge
- Author
Watanabe, Micah and McNamara, Danielle S.
- Abstract
Past research on knowledge has differentiated between dimensions of knowledge (e.g., amount, accuracy, specificity, coherence). This paper introduces a novel dimension of knowledge, the Motivational Utility of Knowledge (MUK), that is based on fundamental human needs (e.g., physical safety, affiliation, actualization, reproduction). Adults in the United States (N = 190) were recruited from an online survey platform and paid for participation. Participants read a set of four texts arguing different views of houselessness and were administered a comprehension test after each text. Participants were asked about their conceptions of houselessness before and after reading. Finally, they were given the MUK scale and a demographics questionnaire, including questions about their personal experience with houselessness, and were administered a general prior knowledge test and a vocabulary knowledge test. We examined MUK, the factor structure of the scale, and the relationship between MUK and other measures of knowledge. The analyses showed that the subscales of MUK loaded onto a single factor: an overall value of houselessness knowledge. In addition, we found that MUK was correlated with conceptions of houselessness and comprehension of texts on houselessness, indicating that the scale was valid. Overall, the findings demonstrate that MUK is an important dimension of knowledge to consider in learning tasks.
- Published
- 2023
- Full Text
- View/download PDF
6. Summarizing versus Rereading Multiple Documents
- Author
McNamara, Danielle S., Watanabe, Micah, Huynh, Linh, McCarthy, Kathryn S., Allen, Laura K., and Magliano, Joseph P.
- Abstract
Writing an integrated essay based on multiple documents requires students to both comprehend the documents and integrate them into a coherent essay. In the current study, we examined the effects of summarization as a potential reading strategy to enhance participants' multiple-document comprehension and integrated essay writing. Participants (n = 295) were randomly assigned to either summarize or reread five texts on sun exposure and radiation. They produced an integrated essay based on the texts that they read, which was scored by expert raters. Finally, the participants completed three knowledge assessments (topic, domain, general). Readers who summarized texts had lower essay scores than readers who reread the texts. However, within the summary group, summary quality was positively correlated with essay score. These findings are discussed within the context of multiple-document comprehension and writing skill. [This paper will be published in "Contemporary Educational Psychology."]
- Published
- 2023
- Full Text
- View/download PDF
7. Personalized Learning in iSTART: Past Modifications and Future Design
- Author
McCarthy, Kathryn S., Watanabe, Micah, Dai, Jianmin, and McNamara, Danielle S.
- Abstract
Computer-based learning environments (CBLEs) provide unprecedented opportunities for personalized learning at scale. One such system, iSTART (Interactive Strategy Training for Active Reading and Thinking), is an adaptive, game-based tutoring system for reading comprehension. This paper describes how efforts to increase personalized learning have improved the system. It also provides results of a recent implementation of an adaptive logic that increases or decreases text difficulty based on students' performance rather than presenting texts randomly. High school students who received adaptive text selection showed an increased sense of learning. Adaptive text selection also resulted in greater pre-training to post-training comprehension test gains, especially for less-skilled readers. The findings demonstrate that system-driven, just-in-time support consistent with the goals of personalized learning benefits the efficacy of computer-based learning environments. [This paper will be published in "Journal of Research on Technology in Education."] (A minimal sketch of the adaptive text-selection logic appears after this record.)
- Published
- 2020
- Full Text
- View/download PDF
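As a hedged illustration of what performance-based adaptive text selection might look like in code (this is not the iSTART implementation; the thresholds, step size, and 1-10 level range are invented for the example):

```python
# Hypothetical sketch of performance-based adaptive text selection.
import random

def next_difficulty(current_level, last_score, min_level=1, max_level=10,
                    step=1, low=0.5, high=0.8):
    """Return the difficulty level for the next practice text.

    `last_score` is the student's score on the previous text, scaled 0-1.
    """
    if last_score >= high:        # strong performance: harder text next
        return min(current_level + step, max_level)
    if last_score < low:          # weak performance: easier text next
        return max(current_level - step, min_level)
    return current_level          # otherwise stay at the same level

def pick_text(texts_by_level, level):
    """Pick any available text at the target level (placeholder selection)."""
    return random.choice(texts_by_level[level])
```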
8. The Design Implementation Framework: Guiding Principles for the Redesign of a Reading Comprehension Intelligent Tutoring System
- Author
McCarthy, Kathryn S., Watanabe, Micah, and McNamara, Danielle S.
- Abstract
The Design Implementation Framework, or DIF, is a design approach that evaluates learner and user experience at multiple points in the development of intelligent tutoring systems. In this chapter, we explore how DIF was used to make system modifications to iSTART, a game-based intelligent tutoring system for reading comprehension. Using DIF as a guide, we conducted internal testing, focus groups, and usability walk-throughs to develop iSTART-3, the latest iteration of iSTART. In addition to these evaluations, DIF highlights the need for experimental evaluation. With this in mind, we describe an experimental evaluation of iSTART-3 as compared to its predecessor, iSTART-ME2. Analyses revealed an interesting tension between system usability and user preference that has important implications for instructional designers. [This chapter was published in: M. Schmidt, A. A. Tawfik, I. Jahnke, & Y. Earnshaw (Eds.), "Learner and User Experience Research: An Introduction for the Field of Learning Design & Technology." EdTech Books.]
- Published
- 2020
9. The 'LO'-Down on Grit: Non-Cognitive Trait Assessments Fail to Predict Learning Gains in iSTART and W-Pal
- Author
McCarthy, Kathryn S., Likens, Aaron D., Kopp, Kristopher K., Perret, Cecile A., Watanabe, Micah, and McNamara, Danielle S.
- Abstract
The current study explored relations between non-cognitive traits (Grit, Learning Orientation, Performance Orientation), reading skill, and performance across three experiments conducted in the context of two intelligent tutoring systems, iSTART and Writing Pal. Results showed that learning outcomes (comprehension score, holistic essay score) were moderately to strongly correlated with reading skill. In contrast, these outcomes only weakly correlated with learning orientation (LO) and were unrelated to either Grit or performance orientation (PO). Further, regression analyses indicated that none of the non-cognitive traits predicted learning gains. We discuss these findings in the context of theoretical perspectives, construct validity and reliability, and large-scale assessment.
- Published
- 2018
10. iSTART: Adaptive Comprehension Strategy Training and Stealth Literacy Assessment
- Author
McNamara, Danielle S., Arner, Tracy, Butterfuss, Reese, Fang, Ying, Watanabe, Micah, Newton, Natalie, McCarthy, Kathryn S., Allen, Laura K., and Roscoe, Rod D.
- Abstract
The Interactive Strategy Training for Active Reading and Thinking (iSTART) game-based intelligent tutoring system (ITS) was developed with a foundation of comprehension theory and principles of learning science to improve students' comprehension of complex scientific texts. iSTART has been shown to improve reading comprehension for learners from middle school through adulthood, particularly lower knowledge readers, through strategy instruction and game-based practice. This paper describes iSTART, the theoretical foundations that have guided iSTART development, and evidence for the feasibility of game-based practice to improve learning outcomes. This paper also introduces a novel method of assessing students' reading comprehension through game-based literacy assessments that have been incorporated in iSTART. The development of these stealth assessments was guided by recent work emphasizing the need for rapid, dynamic, and low stakes assessments that evaluate students' reading skills in the context of brief, dynamic games. Stealth assessments can generate estimates of multiple aspects of students' reading comprehension quickly and within a motivating environment. The work described in this paper is a promising method to assess students' literacy in an unobtrusive and authentic way that may lead to improved learning outcomes for students. [This is the online version of an article published in "International Journal of Human-Computer Interaction."]
- Published
- 2022
- Full Text
- View/download PDF
11. Expert Thinking With Generative Chatbots.
- Author
Imundo, Megan N., Watanabe, Micah, Potter, Andrew H., Gong, Jiachen, Arner, Tracy, and McNamara, Danielle S.
- Abstract
Artificial intelligence (AI)-driven generative chatbots can produce large quantities of text instantly across a range of domains, using authoritative tones that create the perception of expertise. This critical synthesis compares artificial expertise and human expertise and examines ways in which generative chatbots can support cognition using an expert thinking framework. Findings indicate that generative chatbots may support experts' cognition as collaborators or by offloading lower-level tasks. Moreover, generative chatbots are a promising training tool for developing future experts, in part because they can provide learning models and practice opportunities. The use of generative chatbots to offload lower-level tasks, however, may harm expert development by disrupting knowledge communities. Finally, a lack of domain knowledge in nonexpert users may limit the effectiveness of generative chatbots in supporting higher-level cognition and agency. Overall, existing research suggests that the potential for generative chatbots to support users' cognition depends on a user's level of expertise.

General Audience Summary: Generative chatbots are artificial intelligence (AI) programs designed to have natural conversations with users. Since the release of ChatGPT (GPT stands for Generative Pre-trained Transformer), generative chatbots have become widely available. Generative chatbots are especially powerful because they are built on computer neural networks and trained on vast amounts of data. In addition, the text they produce can closely resemble expert knowledge and writing on nearly any topic. Consequently, individuals across industries and governments are interested in the potential for generative chatbots to support cognition in experts and nonexperts. This article reviews the history of chatbots, compares human expertise and artificial expertise, and then describes how individuals attain expertise through observing models, completing scaffolded tasks, and engaging in deliberate practice. Afterward, the article discusses how generative chatbots have been used, and could be used in the future, to support cognition (i.e., thoughts and mental processes) for users with varied amounts of domain knowledge (i.e., experts, novices, and laypersons). Research on potential approaches to leveraging generative chatbots to support cognition by these users is primarily drawn from three applied domains: education, medicine, and law. Research with the current generation of generative chatbots like ChatGPT is new and rapidly progressing, but research thus far suggests that (a) the roles that generative chatbots take on to support thinking vary depending on how much knowledge a user has on a particular topic and (b) generative chatbots show promise in supporting experts' cognition and the training of novices who might be future experts in their fields, but (c) laypersons' lack of prior knowledge currently limits generative chatbots' ability to support their thinking and their agency in engaging with unfamiliar domains more broadly. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. iSTART‐Early and Now I Can Read: Effective Reading Strategies for Young Readers.
- Author
Watanabe, Micah, Arner, Tracy, and McNamara, Danielle
- Subjects
READING strategies, CURRICULUM, COMPREHENSION, PARAPHRASE, INTELLIGENT tutoring systems
- Abstract
Students in the 3rd and 4th grade often encounter what has been called a reading "slump" when their class curriculums increasingly ask them to comprehend and learn from texts. Students are more likely to struggle if they have not been offered sufficient opportunities to build world and domain knowledge and engage in challenging comprehension tasks while developing their reading skills. Thus, it is essential to give young readers opportunities to build their world and domain knowledge and to teach them comprehension strategies such as asking questions, paraphrasing, and self‐explaining. This paper introduces iSTART‐Early, an intelligent tutoring system designed to provide instruction and practice opportunities for students to learn comprehension strategies and build knowledge about diverse topics. The theoretical foundations, history and efficacy, and design of iSTART‐Early are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. iSTART: Adaptive Comprehension Strategy Training and Stealth Literacy Assessment.
- Author
McNamara, Danielle S., Arner, Tracy, Butterfuss, Reese, Fang, Ying, Watanabe, Micah, Newton, Natalie, McCarthy, Kathryn S., Allen, Laura K., and Roscoe, Rod D.
- Subjects
INTELLIGENT tutoring systems, READING comprehension, EDUCATIONAL outcomes, SCIENCE students, LITERACY
- Abstract
The Interactive Strategy Training for Active Reading and Thinking (iSTART) game-based intelligent tutoring system (ITS) was developed with a foundation of comprehension theory and principles of learning science to improve students' comprehension of complex scientific texts. iSTART has been shown to improve reading comprehension for learners from middle school through adulthood, particularly lower knowledge readers, through strategy instruction and game-based practice. This paper describes iSTART, the theoretical foundations that have guided iSTART development, and evidence for the feasibility of game-based practice to improve learning outcomes. This paper also introduces a novel method of assessing students' reading comprehension through game-based literacy assessments that have been incorporated in iSTART. The development of these stealth assessments was guided by recent work emphasizing the need for rapid, dynamic, and low stakes assessments that evaluate students' reading skills in the context of brief, dynamic games. Stealth assessments can generate estimates of multiple aspects of students' reading comprehension quickly and within a motivating environment. The work described in this paper is a promising method to assess students' literacy in an unobtrusive and authentic way that may lead to improved learning outcomes for students. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. The automated model of comprehension version 4.0 – Validation studies and integration of ChatGPT.
- Author
Corlatescu, Dragos-Georgian, Watanabe, Micah, Ruseti, Stefan, Dascalu, Mihai, and McNamara, Danielle S.
- Subjects
COMPUTER assisted instruction, RESEARCH methodology, NATURAL language processing, ARTIFICIAL intelligence, PSYCHOLOGY, COGNITION, EDUCATIONAL tests & measurements, LEARNING strategies, CONCEPTUAL structures, PHILOSOPHY of education, AUTOMATION, THEORY, SYSTEM integration
- Abstract
Modeling reading comprehension processes is a critical task for Learning Analytics, as accurate models of the reading process can be used to match students to texts, identify appropriate interventions, and predict learning outcomes. This paper introduces an improved version of the Automated Model of Comprehension, namely version 4.0. AMoC has its roots in two theoretical models of the comprehension process (i.e., the Construction-Integration model and the Landscape model), and the new version leverages state-of-the-art large language models, more specifically ChatGPT, to better contextualize the text and simplify the construction of the underlying graph model. Besides showcasing the usage of the model, the study introduces three in-depth psychological validations that argue for the model's adequacy in modeling reading comprehension. In these studies, we demonstrated that AMoC is in line with the theoretical background proposed by the Construction-Integration and Landscape models, and it is better at replicating results from previous human psychological experiments than its predecessor. Thus, AMoC v4.0 can be further used as an educational tool to, for example, help teachers design better learning materials personalized for student profiles. Additionally, we release the code from AMoC v4.0 as open source in a Google Colab notebook and a GitHub repository.
• Introduce the Automated Model of Comprehension version 4.0, which integrates state-of-the-art large language models (ChatGPT).
• Perform 3 in-depth psychological validation studies to support the model's adequacy in modeling reading comprehension.
• AMoC v4.0 aligns with theoretical frameworks, namely the Construction-Integration and Landscape models.
• The model outperforms its predecessor and replicates results from human psychological experiments.
• AMoC v4.0 can serve as an educational tool for designing personalized learning materials.
[ABSTRACT FROM AUTHOR] (An illustrative sketch of an LLM-driven concept-graph loop appears after this record.)
- Published
- 2024
- Full Text
- View/download PDF
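A deliberately simplified, hypothetical sketch of how a ChatGPT-style model could drive concept-graph construction during a simulated reading cycle. The prompt, the model name (gpt-4o-mini), the co-occurrence edges, and the decay rate are all assumptions; for the actual AMoC v4.0 code, see the open-source release mentioned in the abstract.

```python
# Sketch only: LLM-extracted concepts feeding a decaying-activation graph.
import itertools
import networkx as nx
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is available in the environment

def concepts_for(sentence):
    """Ask a chat model for the key concepts (one per line) in a sentence."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{
            "role": "user",
            "content": "List the key concepts in this sentence, one per line, "
                       f"nouns only:\n{sentence}",
        }],
    )
    lines = reply.choices[0].message.content.splitlines()
    return [line.strip().lower() for line in lines if line.strip()]

def simulate_reading(sentences, decay=0.5):
    """Concepts from the current sentence are fully active; older ones decay."""
    graph, activation = nx.Graph(), {}
    for sentence in sentences:
        for node in activation:
            activation[node] *= decay
        concepts = concepts_for(sentence)
        graph.add_nodes_from(concepts)
        for a, b in itertools.combinations(concepts, 2):
            graph.add_edge(a, b)      # link concepts co-occurring in a sentence
        for concept in concepts:
            activation[concept] = 1.0
    return graph, activation
```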