112,087 results for "Randomized Controlled Trials"
Search Results
2. Should College Be 'Free'? Evidence on Free College, Early Commitment, and Merit Aid from an Eight-Year Randomized Trial. EdWorkingPaper No. 24-952
- Author
- Annenberg Institute for School Reform at Brown University, Douglas N. Harris, and Jonathan Mills
- Abstract
We provide evidence about college financial aid from an eight-year randomized trial where high school ninth graders received a $12,000 merit-based grant offer. The program was designed to be free of tuition/fees at community colleges and substantially lower the cost of four-year colleges. During high school, it increased students' college expectations and low-cost effort, but not higher-cost effort, such as class attendance. The program likely increased two-year college graduation, perhaps because of the free college framing, but did not affect overall college entry, graduation, employment, incarceration, or teen pregnancy. Additional analysis helps explain these modest effects and variation in results across prior studies.
- Published
- 2024
3. Challenges and Opportunities of Meta-Analysis in Education Research
- Author
- Hansford, Nathaniel and Schechter, Rachel L.
- Abstract
Meta-analyses are systematic summaries of research that use quantitative methods to find the mean effect size (standardized mean difference) for interventions. Critics of meta-analysis point out that such analyses can conflate the results of low- and high-quality studies, make improper comparisons, and produce statistical noise. All these criticisms are valid for low-quality meta-analyses. However, high-quality meta-analyses correct for all these problems. Critics of meta-analysis often suggest that selecting high-quality RCTs is a more valid methodology. However, education RCTs do not show consistent findings, even when all factors are controlled. Education is a social science, and variability is inevitable. Scholars who try to select the best RCTs will likely select RCTs that confirm their biases. High-quality meta-analyses offer a more transparent and rigorous model for determining best practices in education. While meta-analyses are not without limitations, they are the best tool for evaluating educational pedagogies and programs.
- Published
- 2023
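The "standardized mean difference" named in the abstract above, and the inverse-variance pooling a meta-analysis performs on it, can be sketched in a few lines. This is a minimal illustration with hypothetical numbers, not any particular study's procedure; the small-sample (Hedges) correction shown is one common convention.

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across treatment and control groups.
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                   / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp
    # Correction factor J = 1 - 3 / (4*df - 1), df = n_t + n_c - 2,
    # shrinks d toward zero in small samples.
    j = 1 - 3 / (4 * (n_t + n_c) - 9)
    return d * j

def fixed_effect_mean(effects, variances):
    """Inverse-variance weighted mean effect size across studies."""
    weights = [1 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Three hypothetical studies: (effect size, sampling variance).
studies = [(0.20, 0.01), (0.35, 0.04), (0.10, 0.02)]
pooled = fixed_effect_mean([e for e, _ in studies], [v for _, v in studies])
```

Precise studies (small variances) dominate the pooled mean, which is why conflating low- and high-quality studies, as the critics above note, matters less when quality moderates the weights.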
4. A Multisite Randomized Study of an Online Learning Approach to High School Credit Recovery: Effects on Student Experiences and Proximal Outcomes
- Author
- Jordan Rickles, Margaret Clements, Iliana Brodziak de los Reyes, Mark Lachowicz, Shuqiong Lin, and Jessica Heppen
- Abstract
Online credit recovery will likely expand in the coming years as school districts try to address increased course failure rates brought on by the coronavirus pandemic. Some researchers and policymakers, however, raise concerns over how much students learn in online courses, and there is limited evidence about the effectiveness of online credit recovery. This article presents findings from a multisite randomized study, conducted prior to the pandemic, to expand the field's understanding of online credit recovery's effectiveness. Within 24 high schools from a large urban district, the study randomly assigned 1,683 students who failed Algebra 1 or ninth grade English to either a summer credit recovery class that used an online curriculum with in-class teacher support or the school's business-as-usual teacher-directed class. The results suggest that online credit recovery had no significant effects on student course experiences and content knowledge, but it significantly lowered credit recovery rates for English. There was limited heterogeneity in effects across students and schools. Non-response on the study-administered student survey and test limits our confidence in the student experience and content knowledge results, but the findings are robust to different approaches to handling the missing data (multiple imputation or listwise deletion). We discuss how the findings add to the evidence base about online credit recovery and the implications for future research. [This paper will be published in "Journal of Research on Educational Effectiveness."]
- Published
- 2023
- Full Text
- View/download PDF
5. The Integration of the Scholarship of Teaching and Learning into the Discipline of Communication Sciences and Disorders
- Author
- Friberg, Jennifer, Hoepner, Jerry K., Sauerwein, Allison M., and Mandulak, Kerry
- Abstract
McKinney (2018) has argued that for the scholarship of teaching and learning (SoTL) to advance within a discipline, the integration of SoTL must be closely examined and opportunities for growth in SoTL must be recognized and discussed. To that end, this paper reflects on the degree to which SoTL is integrated into communication sciences and disorders (CSD) by examining a variety of topics: perspectives and theories historically valued by our discipline; existing supports for SoTL at various levels (i.e., individual teacher-scholars, departments, institutions, and the CSD discipline as a whole); and the application of SoTL findings in teaching and learning. Four specific recommendations are made based on this examination and reflection.
- Published
- 2023
6. Using Auxiliary Data to Boost Precision in the Analysis of A/B Tests on an Online Educational Platform: New Data and New Results
- Author
- Sales, Adam C., Prihar, Ethan B., Gagnon-Bartsch, Johann A., and Heffernan, Neil T.
- Abstract
Randomized A/B tests within online learning platforms represent an exciting direction in learning sciences. With minimal assumptions, they allow causal effect estimation without confounding bias and exact statistical inference even in small samples. However, often experimental samples and/or treatment effects are small, A/B tests are underpowered, and effect estimates are overly imprecise. Recent methodological advances have shown that power and statistical precision can be substantially boosted by coupling design-based causal estimation to machine-learning models of rich log data from historical users who were not in the experiment. Estimates using these techniques remain unbiased and inference remains exact without any additional assumptions. This paper reviews those methods and applies them to a new dataset including over 250 randomized A/B comparisons conducted within ASSISTments, an online learning platform. We compare results across experiments using four novel deep-learning models of auxiliary data and show that incorporating auxiliary data into causal estimates is roughly equivalent to increasing the sample size by 20% on average, or as much as 50-80% in some cases, relative to t-tests, and by about 10% on average, or as much as 30-50%, compared to cutting-edge unbiased machine-learning estimates that use only data from the experiments. We show that the gains can be even larger for estimating subgroup effects, hold even when the remnant is unrepresentative of the A/B test sample, and extend to post-stratification population effects estimators.
- Published
- 2023
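The core idea behind the precision gains described above, that predictions from a model fit to historical, non-experimental users can be subtracted from outcomes without biasing a randomized comparison, can be sketched with simulated data. This is a simplified illustration, not the paper's actual deep-learning estimators; all variable names and the data-generating numbers here are hypothetical.

```python
import random
import statistics

random.seed(0)

def diff_in_means(y, z):
    """Difference-in-means estimate and its standard error."""
    yt = [yi for yi, zi in zip(y, z) if zi == 1]
    yc = [yi for yi, zi in zip(y, z) if zi == 0]
    est = statistics.mean(yt) - statistics.mean(yc)
    se = (statistics.variance(yt) / len(yt)
          + statistics.variance(yc) / len(yc)) ** 0.5
    return est, se

# Simulated A/B test. `yhat` stands in for a prediction from a model
# trained on historical users outside the experiment; because it never
# uses treatment information, subtracting it off cannot introduce bias.
n = 400
yhat = [random.gauss(0, 1) for _ in range(n)]    # auxiliary prediction
z = [1 if i < n // 2 else 0 for i in range(n)]   # balanced assignment
random.shuffle(z)                                # randomize assignment
tau = 0.3                                        # true treatment effect
y = [yh + 0.1 * random.gauss(0, 1) + tau * zi for yh, zi in zip(yhat, z)]

est_raw, se_raw = diff_in_means(y, z)
est_adj, se_adj = diff_in_means([yi - yh for yi, yh in zip(y, yhat)], z)
# Residualizing on the auxiliary prediction targets the same effect but
# typically shrinks the standard error when the prediction is accurate.
```

The better the historical model predicts outcomes, the larger the variance reduction, which is the mechanism behind the "equivalent to increasing the sample size" framing in the abstract.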
7. Designing Field Experiments to Integrate Research on Costs
- Author
- A. Brooks Bowden
- Abstract
Although experimental evaluations have been labeled the "gold standard" of evidence for policy (U.S. Department of Education, 2003), evaluations without an analysis of costs are not sufficient for policymaking (Monk, 1995; Ross et al., 2007). Funding organizations now require cost-effectiveness data in most evaluations of effects. Yet, there is little guidance on how to integrate research on costs into efficacy or effectiveness evaluations. As a result, research proposals and papers are disjointed in the treatment of costs, implementation, and effects, and studies often miss opportunities to integrate what is learned from the cost component into what is learned about effectiveness. To address this issue, this paper uses common evaluation frameworks to provide guidance for integrating research on costs into the design of field experiments building on the ingredients method (Levin et al., 2018). The goal is to improve study design, resulting in more cohesive, efficient, and higher-quality evaluations.
- Published
- 2023
- Full Text
- View/download PDF
8. A Randomized Control Trial on the Effects of MoBeGo, a Self-Monitoring App for Challenging Behavior
- Author
- Bruhn, Allison, Wehby, Joseph, Hoffman, Lesa, Estrapala, Sara, Rila, Ashley, Hancock, Eleanor, Van Camp, Alyssa, Sheaffer, Amanda, and Copeland, Bailey
- Abstract
The purpose of this study was to examine the effects of MoBeGo, a mobile self-monitoring app, on the initial and sustained academic engagement and disruptive behavior of third- to eighth-grade students with challenging behavior. Student-teacher pairs (N = 57) were randomly assigned to the treatment (MoBeGo) or control (business-as-usual) condition. We conducted systematic direct observation of students' behavior throughout prebaseline, baseline, intervention, and postintervention conditions of the study. Multivariate multilevel models revealed differential improvement for the MoBeGo group in student outcomes (less disruptive behavior; more academic engagement) from baseline to intervention, as well as sustained postintervention effects for disruptive behavior. Limitations, future directions, and implications for practice are discussed.
- Published
- 2022
- Full Text
- View/download PDF
9. Using Mixed Methods to Explore Variations in Impact within RCTs: The Case of Project COMPASS
- Author
- Edmunds, Julie A., Gicheva, Dora, Thrift, Beth, and Hull, Marie
- Abstract
Randomized controlled trials (RCTs) in education are common as the design allows for an unbiased estimate of the overall impact of a program. As more RCTs are completed, researchers are also noting that an overall average impact may mask substantial variation across sites or groups of individuals. Mixed methods can provide insight and help in unpacking some of the reasons for these variations in impact. This article contributes to the field of mixed methods research by integrating mixed methods into a recently developed conceptual framework for understanding variations in impact. We model the use of this approach within the context of an RCT for online courses that found differences in impact across courses.
- Published
- 2022
- Full Text
- View/download PDF
10. Improvement Testing in the Year Up Professional Training Corps Program: Final Grant Report
- Author
- Abt Associates, Inc., Fein, David, and Maynard, Rebecca A.
- Abstract
In 2015, Abt Associates received a grant from the Institute of Education Sciences (IES) for a five-year "Development and Innovation" study of Year Up's Professional Training Corps (PTC) program. The purposes of the study were to gauge progress in implementing PTC and to develop and test improvements where needed. Fein et al. (2020) summarize the IES study's approach and findings. A subsequent grant from Arnold Ventures provided support for extending the two analyses--to three follow-up years for Study 1 and to four years for Study 2. This report provides findings from these longer-term analyses. Study 1 found no difference in average earnings or months enrolled in college in follow-up Years 2 and 3 between young adults assigned to PTC and their control group counterparts. (As expected, the PTC group earned less and spent more time in college than the control group in Year 1, when participants were still in the program.) The results also show modest increases in receipt of credentials (mostly short-term certificates based on credit earned at partner colleges during PTC).
- Published
- 2022
11. Improving Oral and Written Narration and Reading Comprehension of Children At-Risk for Language and Literacy Difficulties: Results of a Randomized Clinical Trial
- Author
- Gillam, Sandra Laing, Vaughn, Sharon, Roberts, Greg, Capin, Philip, Fall, Anna-Maria, Israelsen-Augenstein, Megan, Holbrook, Sarai, Wada, Rebekah, Hancock, Allison, Fox, Carly, Dille, Jordan, Magimairaj, Beula M., and Gillam, Ronald B.
- Abstract
Narration has been shown to be a foundational skill for literacy development in school-age children. Elementary teachers routinely conduct classroom lessons that focus on reading decoding and comprehension, but they rarely provide instruction in oral narration (Hall et al., 2021). This multisite randomized controlled trial was designed to rigorously evaluate the efficacy of the "Supporting Knowledge of Language and Literacy" ("SKILL") intervention program for improving oral narrative comprehension and production. Three hundred fifty-seven students who were at-risk for language and literacy difficulties in Grades 1-4 in 13 schools across seven school districts were randomly assigned to the "SKILL" treatment condition or a business as usual (BAU) control condition. "SKILL" was provided to small groups of two to four students in 36 thirty-minute lessons across a 3-month period. Multilevel modeling with students nested within teachers and teachers nested within schools revealed students who received the "SKILL" treatment significantly outperformed students in the BAU condition on measures of oral narrative comprehension and production immediately after treatment. Oral narrative production for the "SKILL" treatment group remained significantly more advanced at follow-up testing conducted 5 months after intervention ended. Improvements in oral narration generalized to a measure of written narration at posttest and the treatment advantage was maintained at follow-up. Grade level did not moderate effects for oral narration, but it did for reading comprehension, with a higher impact for students in grades 3 and 4. [This is the online version of an article published in "Journal of Educational Psychology."]
- Published
- 2022
- Full Text
- View/download PDF
12. What Works Clearinghouse: Procedures and Standards Handbook, Version 5.0. WWC 2022008
- Author
- National Center for Education Evaluation and Regional Assistance (NCEE) (ED/IES), What Works Clearinghouse (WWC) and American Institutes for Research (AIR)
- Abstract
Education decisionmakers need access to the best evidence about the effectiveness of education interventions, including practices, products, programs, and policies. It can be difficult, time consuming, and costly to access and draw conclusions from relevant studies about the effectiveness of interventions. The What Works Clearinghouse (WWC) addresses the need for credible, succinct information by identifying existing research in education, assessing the quality of this research, and summarizing and disseminating the evidence from studies that meet WWC standards. This "WWC Procedures and Standards Handbook, Version 5.0," provides a detailed description of how the WWC reviews studies that meet eligibility requirements for a WWC review. Version 5.0 of the "Handbook" replaces the two documents used since October 2020, the "What Works Clearinghouse Procedures Handbook, Version 4.1" (ED602035) and the "What Works Clearinghouse Standards Handbook, Version 4.1" (ED602036). "WWC Procedures and Standards Handbook, Version 5.0" is organized such that most frequently used information appears in earlier chapters. The need for technical knowledge of research design increases in subsequent chapters. Chapter I provides a general overview of WWC procedures and standards. The overview is intended for readers who need a working knowledge of how the WWC reviews studies but who will not conduct study reviews or design studies intended to meet WWC standards. Chapter II describes procedures for screening studies for eligibility. Chapter III describes procedures and standards for reviewing findings from randomized controlled trials and quasi-experimental designs. Chapter IV describes procedures and standards for reviewing findings from studies that use a regression discontinuity design. Chapter V focuses on reviewing findings from group design studies that use advanced methodological procedures, such as randomized controlled trials that estimate complier average causal effects, and analyses that impute missing data. Chapter VI describes procedures and standards for reviewing findings from single-case design studies. Finally, Chapter VII describes procedures for synthesizing and characterizing findings from reviews of individual studies and in intervention reports and practice guides. The "Handbook" concludes with technical appendices. These appendices provide details on the procedures underlying the review process; for example, the calculation and estimation of effect sizes and other computations used in WWC reviews. In addition, the technical appendices include information on procedures underlying the development of WWC products, such as how the WWC identifies studies to include in intervention reports and practice guides.
- Published
- 2022
13. Stakeholders' Experiences of Ethical Challenges in Cluster Randomized Trials in a Limited Resource Setting: A Qualitative Analysis
- Author
- Tiwonge K. Mtande, Carl Lombard, Gonasagrie Nair, and Stuart Rennie
- Abstract
Although the use of the cluster randomized trial (CRT) design to evaluate vaccines, public health interventions or health systems is increasing, the ethical issues posed by the design are not adequately addressed, especially in low- and middle-income country settings (LMICs). To help reveal ethical challenges, qualitative interviews were conducted with key stakeholders experienced in designing and conducting two selected CRTs in Malawi. The 18 interviewed stakeholders included investigators, clinicians, nurses, data management personnel and community workers who were invited to share their experiences related to implementation of CRTs. Data analysis revealed five major themes with ethical implications: (1) The moral obligation for health care providers to participate in health research and its compensation; (2) Suboptimal care services compromising the integrity of CRT; (3) Ensuring scientific validity and withholding care service; (4) Obtaining valid consent and permission for waiver of consent; and (5) Inadequate risk assessment for trial participation. Understanding key ethical issues posed by CRTs in Malawi could improve ethical review and research oversight of this particular study design.
- Published
- 2024
- Full Text
- View/download PDF
14. Exploring Common Trends in Online Educational Experiments
- Author
- Prihar, Ethan, Syed, Manaal, Ostrow, Korinn, Shaw, Stacy, Sales, Adam, and Heffernan, Neil
- Abstract
As online learning platforms become more ubiquitous throughout various curricula, there is a growing need to evaluate the effectiveness of these platforms and the different methods used to structure online education and tutoring. Towards this endeavor, some platforms have performed randomized controlled experiments to compare different user experiences, curriculum structures, and tutoring strategies in order to ensure the effectiveness of their platform and personalize the education of the students using it. These experiments are typically analyzed on an individual basis in order to reveal insights on a specific aspect of students' online educational experience. In this work, the data from 50,752 instances of 30,408 students participating in 50 different experiments conducted at scale within the online learning platform ASSISTments were aggregated and analyzed for consistent trends across experiments. By combining common experimental conditions and normalizing the dependent measures between experiments, this work has identified multiple statistically significant insights on the impact of various skill mastery requirements, strategies for personalization, and methods for tutoring in an online setting. This work can help direct further experimentation and inform the design and improvement of new and existing online learning platforms. The anonymized data compiled for this work are hosted by the Open Science Foundation and can be found at https://osf.io/59shv/. [This paper was published in: "Proceedings of the 15th International Conference on Educational Data Mining," edited by A. Mitrovic and N. Bosch, International Educational Data Mining Society, 2022, pp. 27-38.]
- Published
- 2022
- Full Text
- View/download PDF
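The step of "normalizing the dependent measures between experiments" described in the abstract above can be illustrated by z-scoring each outcome within its own experiment before pooling. This is a generic sketch of that kind of normalization under assumed data, not the authors' exact procedure or their dataset.

```python
import statistics

def zscore_within(values):
    """Standardize a dependent measure within one experiment so that
    outcomes are on a common (mean 0, SD 1) scale across experiments."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)  # population SD of this experiment
    return [(v - mu) / sd for v in values]

# Two hypothetical experiments measured on different scales.
exp_a = [0.2, 0.4, 0.6, 0.8]   # e.g., proportion of problems correct
exp_b = [55, 70, 85, 100]      # e.g., percentage score on a posttest
pooled = zscore_within(exp_a) + zscore_within(exp_b)
```

After this step, conditions that recur across experiments can be compared on the pooled, unitless outcome rather than on each platform measure's original scale.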
15. A Cluster Randomized Pilot Trial of the Equity-Explicit Establish-Maintain-Restore Program among High School Teachers and Students
- Author
- Duong, Mylien T., Gaias, Larissa M., Brown, Eric, Kiche, Sharon, Nguyen, Lillian, Corbin, Catherine M., Chandler, Cassandra J., Buntain-Ricklefs, Joanne J., and Cook, Clayton R.
- Abstract
Student-teacher relationships are important to student outcomes and may be especially pivotal at the high school transition and for minoritized racial/ethnic groups. Although interventions exist to improve student-teacher relationships, none have been shown to be effective among high school students or in narrowing racial/ethnic disparities in student outcomes. This study was conducted to examine the effects of an equity-explicit student-teacher relationship intervention (Equity-Explicit Establish Maintain Restore, or E-EMR) for high school teachers and students. A cluster-randomized pilot trial was conducted with 94 ninth grade teachers and 417 ninth grade students in six high schools. Teachers in three schools were randomized to receive E-EMR training and follow-up supports for one year. Teachers in three control schools conducted business as usual. Student-teacher relationships, sense of school belonging, academic motivation, and academic engagement were collected via student self-report in September and January of their ninth-grade year. Longitudinal models revealed non-significant main effects of E-EMR. However, there were targeted benefits for students who started with low scores at baseline, for Asian, Latinx, multicultural, and (to a lesser extent) Black students. We also found some unexpected effects, where high-performing and/or advantaged groups in the E-EMR condition had less favorable outcomes at post, compared to those in the control group, which may be a result of the equity-explicit focus of E-EMR. Implications and directions for future research are discussed. [This is the online first version of an article published in "School Mental Health."]
- Published
- 2022
- Full Text
- View/download PDF
16. Using Iterative Experimentation to Accelerate Program Improvement: A Case Example
- Author
- Maynard, Rebecca A., Baelen, Rebecca N., Fein, David, and Souvanna, Phomdaen
- Abstract
This article offers a case example of how experimental evaluation methods can be coupled with principles of design-based implementation research (DBIR), improvement science (IS), and rapid-cycle evaluation (RCE) methods to provide relatively quick, low-cost, credible assessments of strategies designed to improve programs, policies, or practices. This article demonstrates the feasibility and benefits of blending DBIR, IS, and RCE practices with embedded randomized controlled trials (RCTs) to improve the pace and efficiency of program improvement. The illustrative case is a two-cycle experimental test of staff-designed strategies for improving a workforce development program. Youth enrolled in Year Up's Professional Training Corps (PTC) programs were randomly assigned to "improvement strategies" designed to boost academic success and persistence through the 6-month learning and development (L&D) phase of the program, when participants spend most of their program-related time in courses offered by partner colleges. The study sample includes 317 youth from three PTC program sites. The primary outcome measures are completion of the program's L&D phase and continued college enrollment beyond the L&D phase. The improvement strategies designed and tested during the study increased program retention through L&D by nearly 10 percentage points and increased college persistence following L&D by 13 percentage points. Blending DBIR, IS, and RCE principles with a multi-cycle RCT generated highly credible estimates of the efficacy of the tested improvement strategies within a relatively short period of time (18 months) at modest cost and with reportedly low burden for program staff.
- Published
- 2022
- Full Text
- View/download PDF
17. Evidence Summary: Coronavirus (COVID-19) and the Use of Face Coverings in Education Settings
- Author
- Department for Education (DfE) (United Kingdom)
- Abstract
At every stage since the start of the pandemic, decisions across education and childcare have been informed by the scientific and medical evidence -- both on the risks of coronavirus (COVID-19) infection, transmission and illness, and on the known risks to children and young people not attending education settings -- balancing public health and education considerations. This report sets out the evidence informing the Government's decision to revisit the guidance on the use of face coverings within secondary schools and colleges in England -- temporarily extending their recommended use in communal areas to also include classrooms and teaching spaces for those in year 7 and above. In making this decision, the Government has balanced education and public health considerations, including the benefits in managing infection and transmission, against any educational and wider health and wellbeing impacts from the recommended use of face coverings. Evidence shows that face coverings can contribute to reducing transmission of COVID-19 primarily by reducing the emission of virus-carrying particles when worn by an infected person. The government will continue to evaluate the data relating to all COVID-19 measures, including in education settings.
- Published
- 2022
18. Review of EEF Projects. Summary of Key Findings
- Author
- Education Endowment Foundation (EEF) (United Kingdom), Sheffield Hallam University (United Kingdom), Sheffield Institute of Education (SIoE), Demack, Sean, Maxwell, Bronwen, Coldwell, Mike, Stevens, Anna, Wolstenholme, Claire, Reaney-Wood, Sarah, Stiell, Bernadette, and Lortie-Forgues, Hugues
- Abstract
This document summarises key findings from the quantitative strands of a review of the Education Endowment Foundation (EEF) evaluations that had reported from the establishment of EEF in 2011 up to January 2019. The quantitative strands summarised include meta-analyses of effect sizes reported for attainment outcomes and descriptive analyses of cost-effectiveness and attrition. Additionally, descriptive univariate analyses of the explanatory variables used in the review are summarised. Complete findings can be found in the main report. The review conducted qualitative interviews to explore perceptions on effective scale-up and developed and piloted an IPE quality measure, and these are reported separately. [For "Review of EEF Projects. Evaluation Report," see ED625480. For "Review of EEF Projects. Technical Annex," see ED625484. For "Review: Scale-Up of EEF Efficacy Trials to Effectiveness Trials," see ED625479. For "Review: EEF Implementation and Process Evaluation (IPE) Quality Pilot," see ED625478.]
- Published
- 2021
19. Review of EEF Projects. Evaluation Report
- Author
- Education Endowment Foundation (EEF) (United Kingdom), Sheffield Hallam University (United Kingdom), Sheffield Institute of Education (SIoE), Demack, Sean, Maxwell, Bronwen, Coldwell, Mike, Stevens, Anna, Wolstenholme, Claire, Reaney-Wood, Sarah, Stiell, Bernadette, and Lortie-Forgues, Hugues
- Abstract
This report presents findings from exploratory, descriptive meta-analyses of effect sizes reported by the first 82 EEF evaluations that used a randomised controlled trial (RCT) or clustered RCT impact evaluation design published up to January 2019. The review used a theoretical framework derived from literature with five overarching themes to group explanatory variables: (1) Intervention; (2) Implementation & fidelity; (3) Theory & evidence; (4) Context; and (5) Evaluation design. Meta-analyses of effect sizes reported for intention-to-treat (ITT) analyses of primary attainment outcomes, ITT analyses of secondary attainment outcomes and free school meals (FSM) subsample analyses of primary or secondary attainment outcomes are reported. Effect sizes reported for psychological outcomes were also examined but not included in the meta-analyses because of the distinct and diverse nature of these outcomes. Also presented are findings from trial/evaluation-level descriptive analyses of cost effectiveness of interventions and overall pupil-level attrition. [For "Review of EEF Projects. Summary of Key Findings," see ED625481. For "Review of EEF Projects. Technical Annex," see ED625484. For "Review: Scale-Up of EEF Efficacy Trials to Effectiveness Trials," see ED625479. For "Review: EEF Implementation and Process Evaluation (IPE) Quality Pilot," see ED625478.]
- Published
- 2021
20. Improving Academic Success and Retention of Participants in Year Up's Professional Training Corps. Actionable Evidence Initiative Case Study
- Author
- Britt, Jessica, Fein, David, Maynard, Rebecca, and Warfield, Garrett
- Abstract
This case study describes a small randomized controlled trial (RCT) comparing alternative strategies for monitoring and supporting academic achievement in Year Up's Professional Training Corps (PTC) program. Year Up is a nonprofit organization dedicated to preparing economically disadvantaged young adults for well-paying jobs with advancement potential. As part of the study, Year Up and its research partners designed and tested strategies for more quickly identifying and supporting participants struggling with their college coursework. To ensure the credibility of the study findings, the team randomly assigned participants either to a coach who would use the new monitoring and support strategies or to a coach who would follow existing practices, which did not place much emphasis on academics. Over two cycles of testing, the study found strong evidence that the alternative strategies tested substantially improved success in courses and, thus, advancement to internships. Three factors contributed to the resulting evidence being actionable: (1) it focused on low- to no-cost, field-initiated practice changes that could be implemented widely; (2) site staff could tailor strategies to local needs and opportunities; and (3) the research team encouraged local staff to modify the strategies being tested between enrollment cohorts. [This case study was commissioned by the Actionable Evidence Initiative and led by Project Evident. Funding from the GreenLight Fund also supported this work.]
- Published
- 2021
21. Insights on Variance Estimation for Blocked and Matched Pairs Designs
- Author
- Pashley, Nicole E. and Miratrix, Luke W.
- Abstract
Evaluating blocked randomized experiments from a potential outcomes perspective has two primary branches of work. The first focuses on larger blocks, with multiple treatment and control units in each block. The second focuses on matched pairs, with a single treatment and control unit in each block. These literatures not only provide different estimators for the standard errors of the estimated average impact, but they are also built on different sets of assumptions. Neither literature handles cases with blocks of varying size that contain singleton treatment or control units, a case which can occur in a variety of contexts, such as with different forms of matching or poststratification. In this article, we reconcile the literatures by carefully examining the performance of variance estimators under several different frameworks. We then use these insights to derive novel variance estimators for experiments containing blocks of different sizes. [For the corresponding grantee submission, see ED599248.]
- Published
- 2021
- Full Text
- View/download PDF
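For the "larger blocks" case the abstract above describes (multiple treatment and control units per block), a standard blocked difference-in-means with a Neyman-style aggregated variance can be sketched as follows. This illustrates the baseline setting the paper starts from, not its novel estimators for blocks containing singleton units; the data are hypothetical.

```python
import statistics

def blocked_estimate(blocks):
    """Blocked difference-in-means with a Neyman-style variance that
    aggregates block-level variance estimates, weighting each block by
    its share of the total sample. Requires at least two treated and
    two control units per block (the 'big block' case)."""
    n = sum(len(yt) + len(yc) for yt, yc in blocks)
    est, var = 0.0, 0.0
    for yt, yc in blocks:
        w = (len(yt) + len(yc)) / n          # block's share of the sample
        est += w * (statistics.mean(yt) - statistics.mean(yc))
        var += w**2 * (statistics.variance(yt) / len(yt)
                       + statistics.variance(yc) / len(yc))
    return est, var ** 0.5

# Two hypothetical blocks of different sizes: (treated, control) outcomes.
blocks = [([5.0, 6.0, 7.0], [4.0, 5.0]),
          ([10.0, 12.0], [9.0, 9.5, 10.5])]
est, se = blocked_estimate(blocks)
```

The matched-pairs case breaks this formula because `statistics.variance` needs at least two units per arm in each block, which is exactly the gap between the two literatures that the article addresses.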
22. Individual Participant Data Meta-Analysis of the Impact of EEF Trials on the Educational Attainment of Pupils on Free School Meals: 2011-2019
- Author
- Education Endowment Foundation (EEF) (United Kingdom), Ashraf, Bilal, Singh, Akansha, Uwimpuhwe, Germaine, Coolen-Maturi, Tahani, Einbeck, Jochen, Higgins, Steve, and Kasim, Adetayo
- Abstract
This study investigates the impact of Education Endowment Foundation (EEF)-funded trials on pupils eligible for free school meals (FSM). Although similar analysis is conducted during each individual evaluation, this report conducts a meta-analysis using data from 88 trials and over half a million pupils to reach conclusions. The report contributes to the evidence about what type of approaches may be effective at reducing the attainment gap, which influences the choice of interventions that EEF chooses to trial and scale up. This study investigates the immediate impact of EEF-funded interventions on attainment in literacy and mathematics based on the following research questions: (1) Do EEF trials improve literacy/mathematics attainment of pupils eligible for FSM? (2) What broad types of interventions are more beneficial for mathematics and literacy outcomes of FSM pupils? and (3) Do FSM pupils improve their literacy/mathematics attainment more or less from EEF-funded interventions compared to their non-FSM peers? [This report was written in partnership with the Durham Research Methods Centre (DRMC).]
- Published
- 2021
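The individual participant data approach above pools raw pupil-level records; its standard aggregate-data counterpart pools study-level effect sizes by inverse-variance weighting. A sketch of the common DerSimonian-Laird random-effects estimator (illustrative of effect-size pooling generally, not the method used in this report):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect size via the DerSimonian-Laird
    estimator: estimate between-study variance tau^2 from Cochran's Q,
    then re-weight each study by 1/(v_i + tau^2)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)            # truncated at zero
    w_re = [1 / (v + tau2) for v in variances]
    est = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return est, se
```

When all studies share one true effect, tau^2 estimates near zero and the result collapses to the fixed-effect pooled estimate.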
23. Toward a System of Evidence for All: Current Practices and Future Opportunities in 37 Randomized Trials
- Author
-
Tipton, Elizabeth, Spybrook, Jessaca, Fitzgerald, Kaitlyn G., Wang, Qian, and Davidson, Caryn
- Abstract
As a result of the evidence-based decision-making movement, the number of randomized trials evaluating educational programs and curricula has increased dramatically over the past 20 years. Policy makers and practitioners are encouraged to use the results of these trials to inform their decision making in schools and school districts. At the same time, however, little is known about the schools taking part in these randomized trials, both regarding how and why they were recruited and how they compare to populations in need of research. In this article, we report on a study of 37 cluster randomized trials funded by the Institute of Education Sciences between 2011 and 2015. Principal investigators of these grants were interviewed regarding the recruitment process and practices. Additionally, data on the schools included in 34 of these studies were analyzed to determine the general demographics of schools included in funded research, as well as how these samples compare to important policy relevant populations. We show that the types of schools included in research differ in a variety of ways from these populations. Large schools from large school districts in urban areas were overrepresented, whereas schools from small school districts in rural areas and towns were underrepresented. The article concludes with a discussion of how recruitment practices might be improved in order to meet the goals of the evidence-based decision-making movement.
- Published
- 2021
- Full Text
- View/download PDF
24. Working out What Works: The Case of the Education Endowment Foundation in England
- Author
-
Edovald, Triin and Nevill, Camilla
- Abstract
Purpose: This article gives an overview of the successes and lessons learned to date of the Education Endowment Foundation (EEF), one of the leading organizations of the What Works movement. Design/Approach/Methods: Starting with its history, this article covers salient components of the EEF's unique journey including lessons learned and challenges in evidence generation. Findings: The EEF has demonstrated that it is feasible to rapidly expand the use of school-based randomized controlled trials (RCTs) in a country context, set high standards for research independence, transparency, and design, and generate new evidence on what works. Challenges include the need to consider alternative designs to RCTs to answer a range of practice-relevant questions, how to best test interventions at scale, and how study findings are reported and interpreted. Originality/Value: This article addresses some of the key components required for the success of What Works organizations globally.
- Published
- 2021
- Full Text
- View/download PDF
25. What Happens after the Program Ends? A Synthesis of Post-Program Effects in Higher Education. Issue Focus
- Author
-
MDRC, Weiss, Michael J., Unterman, Rebecca, and Biedzio, Dorota
- Abstract
Some education programs' early positive effects disappear over time. Other programs have unanticipated positive long-term effects. Foundations warn of the dangers of putting too much weight on in-program effects, which, they say, often fade after a program ends. This Issue Focus tackles the topic of post-program effects in postsecondary education. Are in-program effects--that is, the effects observed while the program was active--maintained once the program ends? Do they grow and improve? Or do they fade out? This investigation capitalizes on two decades of rigorous program evaluations conducted by MDRC, including approximately 25 postsecondary programs, to consider those questions. The Higher Education Randomized Controlled Trials (THE RCT) project draws from a database of student-level data from 31 MDRC projects, sampling 67,400 students. The results of THE RCT are striking: During the year after these programs ended, effects on academic progress were consistently maintained. There is no evidence of discernible fade-out, which provides encouraging information about the lasting value of many postsecondary programs and the value of evidence collected from them. There was also no evidence of post-program growth, perhaps prompting the need to develop programs that equip students for success over the long term.
- Published
- 2021
26. An On-Ramp to Student Success: A Randomized Controlled Trial Evaluation of a Developmental Education Reform at the City University of New York
- Author
-
MDRC, Weiss, Michael J., Scrivener, Susan, Slaughter, Austin, and Cohen, Benjamin
- Abstract
Most community college students are referred to developmental education courses to build basic skills. These students often struggle in these courses and in college more broadly. CUNY Start is a prematriculation program for students assessed as having significant remedial needs. CUNY Start students delay matriculation for one semester and receive time-intensive instruction in math, reading, and writing with a prescribed pedagogy delivered by trained teachers. The program aims to help students complete remediation and prepare for college-level courses. This article describes the results of an experiment at four community colleges (n ≈ 3,800). We estimate that over three years, including one semester that students spent in the program and two-and-a-half years after the program was complete, CUNY Start substantially increased college readiness, slightly increased credit accumulation, and modestly increased graduation rates (by increasing participation in CUNY's highly effective ASAP).
- Published
- 2021
27. Power Analysis for Moderator Effects in Longitudinal Cluster Randomized Designs
- Author
-
Li, Wei and Konstantopoulos, Spyros
- Abstract
Cluster randomized control trials often incorporate a longitudinal component where, for example, students are followed over time and student outcomes are measured repeatedly. Besides examining how intervention effects induce changes in outcomes, researchers are sometimes also interested in exploring whether intervention effects on outcomes are modified by moderator variables at the individual (e.g., gender, race/ethnicity) and/or the cluster level (e.g., school urbanicity) over time. This study provides methods for statistical power analysis of moderator effects in two- and three-level longitudinal cluster randomized designs. Power computations take into account clustering effects, the number of measurement occasions, the impact of sample sizes at different levels, covariates effects, and the variance of the moderator variable. Illustrative examples are offered to demonstrate the applicability of the methods.
- Published
- 2023
- Full Text
- View/download PDF
28. A Random Controlled Trial to Examine the Efficacy of Blank Slate: A Novel Spaced Retrieval Tool with Real-Time Learning Analytics
- Author
-
McHugh, Douglas, Feinn, Richard, McIlvenna, Jeff, and Trevithick, Matt
- Abstract
Learner-centered coaching and feedback are relevant to various educational contexts. Spaced retrieval enhances long-term knowledge retention. We examined the efficacy of Blank Slate, a novel spaced retrieval software application, to promote learning and prevent forgetting, while gathering and analyzing data in the background about learners' performance. A total of 93 students from 6 universities in the United States were assigned randomly to control, sequential or algorithm conditions. Participants watched a video on the Republic of Georgia before taking a 60 multiple-choice-question assessment. Sequential (non-spaced retrieval) and algorithm (spaced retrieval) groups had access to Blank Slate and 60 digital cards. The algorithm group reviewed subsets of cards daily based on previous individual performance. The sequential group reviewed all 60 cards daily. All 93 participants were re-assessed 4 weeks later. Sequential and algorithm groups were significantly different from the control group but not from each other with regard to after and delta scores. Blank Slate prevented anticipated forgetting; authentic learning improvement and retention happened instead, with spaced retrieval incurring one-third of the time investment experienced by non-spaced retrieval. Embedded analytics allowed for real-time monitoring of learning progress that could form the basis of helpful feedback to learners for self-directed learning and educators for coaching.
- Published
- 2021
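The spaced-retrieval condition above reviewed an adaptive daily subset of cards based on prior performance. The abstract does not describe Blank Slate's actual algorithm, but the general idea can be sketched with a classic Leitner-box scheduler (all names here -- INTERVALS, Card, review -- are hypothetical, for illustration only):

```python
from dataclasses import dataclass

INTERVALS = [1, 2, 4, 8, 16]  # days until next review, per box (assumed)

@dataclass
class Card:
    prompt: str
    box: int = 0   # Leitner box index; higher box = longer spacing
    due: int = 0   # day the card next comes up for review

def review(card, day, correct):
    """Promote a card on correct recall, demote to box 0 on a miss,
    and reschedule it by the interval of its new box."""
    card.box = min(card.box + 1, len(INTERVALS) - 1) if correct else 0
    card.due = day + INTERVALS[card.box]
    return card

def due_today(cards, day):
    """The daily subset a spaced-retrieval condition would review
    (a sequential, non-spaced condition would review all cards)."""
    return [c for c in cards if c.due <= day]
```

This captures why spaced retrieval cost roughly a third of the sequential group's time: well-learned cards surface at ever-longer intervals instead of daily.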
29. Embedding Causal Research Designs in Pre-K Systems at Scale
- Author
-
Abenavoli, Rachel, Rojas, Natalia, Unterman, Rebecca, Cappella, Elise, Wallack, Josh, and Morris, Pamela
- Abstract
In this article, Rachel Abenavoli, Natalia Rojas, Rebecca Unterman, Elise Cappella, Josh Wallack, and Pamela Morris argue that research-practice partnerships make it possible to rigorously study relevant policy questions in ways that would otherwise be infeasible. Randomized controlled trials of small-scale programs have shown us that early childhood interventions can yield sizable benefits. But when we move from relatively small, tightly controlled studies to scaled-up initiatives, the results are often disappointing. Here the authors describe how their partnership with New York City's Department of Education, as the city rapidly rolled out its universal pre-K initiative, gave them opportunities to collect experimental and quasi-experimental evidence while placing a minimal burden on educators. They argue that this type of research can answer the most pressing early childhood education (ECE) questions, which are less about whether ECE can make a difference and more about the conditions under which early interventions are effective at scale. They offer three recommendations for researchers, policy makers, and practitioners who are considering partnership work: build a foundation of trust and openness; carefully consider whether rigorous causal research or descriptive research is the right choice in a given situation; and be flexible, seeking opportunities for rigorous research designs that may already be embedded in early childhood education systems.
- Published
- 2021
30. Developing a Fidelity Measure of Early Intervention Programs for Children with Neuromotor Disorders
- Author
-
An, Mihee, Nord, Jayden, Koziol, Natalie A., Dusing, Stacey C., Kane, Audrey E., Lobo, Michele A., McCoy, Sarah W., and Harbourne, Regina T.
- Abstract
Aim: To describe the development of an intervention-specific fidelity measure and its utilization and to determine whether the newly developed Sitting Together and Reaching to Play (START-Play) intervention was implemented as intended. Also, to quantify differences between START-Play and usual early intervention (uEI) services. Method: A fidelity measure for the START-Play intervention was developed for children with neuromotor disorders by: (1) identifying key intervention components; (2) establishing a measurement coding system; and (3) testing the reliability of instrument scores. After establishing acceptable interrater reliability, 103 intervention videos from the START-Play randomized controlled trial were coded and compared between the START-Play and uEI groups to measure five dimensions of START-Play fidelity, including adherence, dosage, quality of intervention, participant responsiveness, and program differentiation. Results: Fifteen fidelity variables out of 17 had good to excellent interrater reliability evidence with intraclass correlation coefficients (ICCs) ranging from 0.77 to 0.95. The START-Play therapists met the criteria for acceptable fidelity of the intervention (rates of START-Play key component use ≥ 0.8; quality ratings ≥ 3 [on a scale of 1-4]). The START-Play and uEI groups differed significantly in rates of START-Play key component use and quality ratings. Interpretation: The START-Play fidelity measure successfully quantified key components of the START-Play intervention, serving to differentiate START-Play from uEI.
- Published
- 2021
- Full Text
- View/download PDF
31. Review of EEF Projects. Technical Annex
- Author
-
Education Endowment Foundation (EEF) (United Kingdom), Sheffield Hallam University (United Kingdom), Sheffield Institute of Education (SIoE), Demack, Sean, Maxwell, Bronwen, Coldwell, Mike, Stevens, Anna, Wolstenholme, Claire, Reaney-Wood, Sarah, Stiell, Bernadette, and Lortie-Forgues, Hugues
- Abstract
This document provides supplementary statistical tables to support the review of Education Endowment Foundation (EEF) evaluations. This includes: (1) Descriptive (univariate) tables for all explanatory variables; (2) Tables for the meta-analyses of primary ITT effect sizes; (3) Tables for the meta-analyses of secondary ITT effect sizes; (4) Tables for the meta-analyses of FSM subsample primary / secondary effect sizes; (5) Tables for the analyses of cost effectiveness; and (6) Tables for the analyses of pupil-level attrition. The analyses of pupil-level attrition identified a clear (and expected) association with the type of primary ITT outcome that was used in a trial (i.e., commercial tests tended to have higher attrition compared with official/NPD outcomes). For this reason, a limited follow-on elaboration analysis was undertaken for the attrition analyses. Specifically, analyses were undertaken separately for trials that used a commercial and trials that used an official/NPD outcome. This resulted in additional tables: (7) Tables for the elaboration analyses of pupil-level attrition. [For "Review of EEF Projects. Evaluation Report," see ED625480. For "Review of EEF Projects. Summary of Key Findings," see ED625481.]
- Published
- 2021
32. Statistical Power for Randomized Controlled Trials with Clusters of Varying Size
- Author
-
Kush, Joseph M., Konold, Timothy R., and Bradshaw, Catherine P.
- Abstract
Power in multilevel models remains an area of interest to both methodologists and substantive researchers. In two-level designs, the total sample is a function of both the number of level-2 (e.g., schools) clusters and the average number of level-1 (e.g., classrooms) units per cluster. Traditional multilevel power calculations rely on either the arithmetic average or the harmonic mean when estimating the average number of level-1 units across clusters of unbalanced size. The current study evaluates and contrasts these two approaches with simulation-based power estimates in two-group two-level cluster randomized controlled trial designs with unbalanced cluster sizes. Results from the Monte Carlo study demonstrated the largest differences in simulated versus the two forms of calculated power occurred in study designs with large variability in the number of level-1 units sampled. Overall, power was less sensitive to the level-2 sample size or the effect size, regardless of the imbalance in cluster size. Implications of these findings for the design of cluster randomized trials are discussed. [This paper will be published in "The Journal of Experimental Education."]
- Published
- 2021
- Full Text
- View/download PDF
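The arithmetic-versus-harmonic-mean choice this study contrasts can be made concrete with a standard normal-approximation power calculation for a two-arm cluster randomized trial (a sketch, assuming equal allocation of clusters to arms and a standardized effect size; the function and parameter names are my own, not the authors'):

```python
from statistics import NormalDist, harmonic_mean, mean

def crt_power(cluster_sizes, icc, delta, alpha=0.05, avg="harmonic"):
    """Approximate power for a two-level cluster randomized trial with
    unequal cluster sizes summarized by a single 'average' size n_bar.
    Uses Var(effect estimate) = 4 * (icc + (1 - icc)/n_bar) / J in
    effect-size units, with J clusters split evenly across two arms."""
    J = len(cluster_sizes)
    n_bar = (harmonic_mean(cluster_sizes) if avg == "harmonic"
             else mean(cluster_sizes))
    se = (4 * (icc + (1 - icc) / n_bar) / J) ** 0.5
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(delta / se - z)
```

Because the harmonic mean never exceeds the arithmetic mean, the harmonic-mean calculation is the more conservative of the two whenever cluster sizes vary, which is one way to see why the two conventions can disagree with simulated power.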
33. An RCT of a CBT Intervention for Emerging Adults with ADHD Attending College: Functional Outcomes
- Author
-
Eddy, Laura D., Anastopoulos, Arthur D., Dvorsky, Melissa R., Silvia, Paul J., Labban, Jeffrey D., and Langberg, Joshua M.
- Abstract
Objective: The current study reports functional outcomes from a multi-site randomized trial of a cognitive-behavioral treatment program for college students diagnosed with ADHD. Methods: A sample of emerging adults (N = 250; ages 18 to 30) currently attending college were comprehensively evaluated and diagnosed with ADHD (M age = 19.7; 66% female, 6.8% Latino, 66.3% Caucasian). Participants were randomized to either a two-semester intervention, Accessing Campus Connections and Empowering Student Success (ACCESS), or a delayed treatment condition. Participants were assessed with measures of academic, daily life, and relationship functioning prior to treatment, at the end of the first semester, and after the second semester of treatment. Results: Multi-group latent growth curve models revealed moderate effect size improvements on self-report measures of study skills and strategies, as well as on self-report measures of time management, daily functioning, and overall well-being for participants in ACCESS. Importantly, treatment effects were maintained or increased in some cases from the end of the first semester to the end of the second semester. Improvements in self-reported interpersonal functioning were not significantly different across condition and neither condition demonstrated significant change over time in educational record outcomes (GPA and number of credits earned). Conclusions: ACCESS appears to promote improvements in self-reported general well-being and functioning, time management, and study skills and strategies. However, improvements in interpersonal relationships and objective academic outcomes such as GPA were not observed. Clinical implications and future directions for treating ADHD on university and college campuses are discussed. [This is the online version of an article published in "Journal of Clinical Child & Adolescent Psychology."]
- Published
- 2021
- Full Text
- View/download PDF
34. A Randomized Controlled Trial Examining CBT for College Students with ADHD
- Author
-
Anastopoulos, Arthur D., Langberg, Joshua M., Eddy, Laura D., Silvia, Paul J., and Labban, Jeffrey D.
- Abstract
Objective: College students with attention deficit/hyperactivity disorder (ADHD) are at increased risk for numerous educational and psychosocial difficulties. This study reports findings from a large, multisite randomized controlled trial examining the efficacy of a treatment for this population, known as ACCESS -- Accessing Campus Connections and Empowering Student Success. Method: ACCESS is a cognitive-behavioral therapy program delivered via group treatment and individual mentoring across two semesters. A total of 250 students (18-30 years of age, 66% female, 6.8% Latino, 66.3% Caucasian) with rigorously defined ADHD and comorbidity status were recruited from two public universities and randomly assigned to receive ACCESS immediately or on a 1-year delayed basis. Treatment response was assessed on three occasions, addressing primary (i.e., ADHD, executive functioning, depression, anxiety) and secondary (i.e., clinical change mechanisms, service utilization) outcomes. Results: Latent growth curve modeling (LGCM) revealed significantly greater improvements among immediate ACCESS participants in terms of ADHD symptoms, executive functioning, clinical change mechanisms, and use of disability accommodations, representing medium to large effects (Cohen's d = 0.39-1.21). Across these same outcomes, clinical significance analyses using reliable change indices (RCI; Jacobson & Truax, 1992) revealed significantly higher percentages of ACCESS participants showing improvement. Although treatment-induced improvements in depression and anxiety were not evident from LGCM, RCI analyses indicated that immediate ACCESS participants were less likely to report a worsening in depression/anxiety symptoms. Conclusions: Findings from this RCT provide strong evidence in support of the efficacy and feasibility of ACCESS as a treatment for young adults with ADHD attending college.
- Published
- 2021
- Full Text
- View/download PDF
35. A Randomized Controlled Trial of MTSS-B in High Schools: Improving Classroom Management to Prevent EBDs
- Author
-
Bradshaw, Catherine P., Pas, Elise T., Debnam, Katrina J., and Johnson, Sarah Lindstrom
- Abstract
This study presents findings from a group-randomized controlled trial in 58 high schools testing the effectiveness of training in a multitiered system of supports for behavior (MTSS-B) framework, which was leveraged to reduce students' risk for emotional and behavior disorders. The trial tested the impact of MTSS-B, which included (a) training in the broader MTSS-B framework that went beyond the existing Tier 1 (school-wide PBIS) training offered by the state; (b) project-provided coaching and technical assistance supports; and (c) integration and training in evidence-based behavioral or social-emotional programs at Tiers 2 and/or 3. We report effects of MTSS-B on implementation of positive behavior supports across all three tiers using the Schoolwide Evaluation Tool (SET) and Individual Student Systems Evaluation Tool (ISSET), as well as on external observations of teachers' use of classroom management strategies. Results indicated significant effects on multiple SET subscales and significant reductions in teachers' use of reactive behavior management.
- Published
- 2021
- Full Text
- View/download PDF
36. Learning from Cluster Randomized Trials in Education: An Assessment of the Capacity of Studies to Determine 'What Works,' 'For Whom,' and 'Under What Conditions'
- Author
-
Spybrook, Jessaca, Zhang, Qi, Kelcey, Ben, and Dong, Nianbo
- Abstract
Over the past 15 years, we have seen an increase in the use of cluster randomized trials (CRTs) to test the efficacy of educational interventions. These studies are often designed with the goal of determining whether a program works, or answering the what works question. Recently, the goals of these studies expanded to include for whom and under what conditions an intervention is effective. In this study, we examine the capacity of a set of CRTs to provide rigorous evidence about for whom and under what conditions an intervention is effective. The findings suggest that studies are more likely to be designed with the capacity to detect potentially meaningful individual-level moderator effects, for example, gender, than cluster-level moderator effects, for example, school type.
- Published
- 2020
- Full Text
- View/download PDF
37. Identification and Counterfactuals for Program Evaluation of Career and Technical Education
- Author
-
Career and Technical Education (CTE) Research Network at American Institutes for Research (AIR), Ross, Stephen L., Brunner, Eric, and Rosen, Rachel
- Abstract
This paper considers recent efforts to conduct experimental and quasi-experimental evaluations of career and technical education programs. It focuses on understanding the counterfactual, or control population, for these program evaluations, discussing how the educational experiences of the control population might vary from those of the treated population and the ways in which the treatment and control populations used for evaluation may differ from each other. The paper begins by discussing the key identification strategies and the associated assumptions used to identify program effects, including regression and propensity score matching, instrumental variables, regression discontinuity designs, randomized controlled trials, and lottery-based admissions. It then presents a series of case studies evaluating the counterfactual for specific studies that used each identification strategy.
- Published
- 2020
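Of the identification strategies this paper surveys, propensity score matching is the most mechanical to illustrate: match each treated unit to the comparison unit with the closest estimated propensity score, then average the outcome differences. A minimal sketch (illustrative of the general technique only, not of any specific CTE evaluation; 1:1 nearest-neighbor matching with replacement):

```python
import numpy as np

def nearest_neighbor_att(ps, treated, y):
    """Estimate the average effect of treatment on the treated (ATT) by
    matching each treated unit to the control unit with the nearest
    propensity score (with replacement) and averaging the differences."""
    ps, treated, y = map(np.asarray, (ps, treated, y))
    t_idx = np.flatnonzero(treated == 1)
    c_idx = np.flatnonzero(treated == 0)
    diffs = []
    for i in t_idx:
        j = c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))]  # closest control
        diffs.append(y[i] - y[j])
    return float(np.mean(diffs))
```

The counterfactual concern the paper raises shows up here directly: the estimate is only as good as the assumption that matched controls reveal what treated units' outcomes would have been absent treatment.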
38. An Eight-Year Cost Analysis from a Randomized Controlled Trial of CUNY's Accelerated Study in Associate Programs. Working Paper
- Author
-
MDRC, Azurdia, Gilda, and Galkin, Katerina
- Abstract
Developed by the City University of New York (CUNY), the Accelerated Study in Associate Programs (ASAP) is a comprehensive program that provides students with up to three years of financial and academic support and other services. In return, students are expected to enroll in classes full time and participate in essential program services. An earlier experimental evaluation found that ASAP nearly doubled graduation rates three years after students entered the study. This paper summarizes the effects on education and financial aid outcomes over an eight-year follow-up period, or five years after the end of program services. This paper also presents the educational investment in students associated with providing these services and any educational investment in students from the CUNY community colleges after the program services were no longer provided. Through the eight-year follow-up period, ASAP's effects on associate's degree receipt persisted, indicating that the program not only helped students graduate faster, but also helped some students who would have never graduated without the program earn a degree. ASAP also led to a significant increase in the amount of financial aid students received through federal and state grants. The net cost (the difference between the total program group costs and the total control group costs) averaged $13,838 per program group member over eight years. CUNY spent, on average, an additional $9,162 per degree earned for students offered ASAP, compared with the control group students over the eight-year follow-up period. The findings also show that the program resulted in increased financial aid from the Pell Grant Program and the New York State Tuition Assistance Program, and reduced the grant dollars these programs invested per degree earned.
- Published
- 2020
39. The Role of Context in Educational RCT Findings: A Call to Redefine 'Evidence-Based Practice'
- Author
-
Kaplan, Avi, Cromley, Jennifer, Perez, Tony, Dai, Ting, Mara, Kyle, and Balsai, Michael
- Abstract
In this commentary, we complement other constructive critiques of educational randomized control trials (RCTs) by calling attention to the commonly ignored role of context in causal mechanisms undergirding educational phenomena. We argue that evidence for the central role of context in causal mechanisms challenges the assumption that RCT findings can be uncritically generalized across settings. Anchoring our argument with an example from our own multistudy RCT project, we argue that the scientific pursuit of causal explanation should involve the rich description of contextualized causal effects. We further call for incorporating the "evidence" of the integral role of context in causal mechanisms into the meaning of "evidence-based practice," with the implication that effective implementation of practice in a new setting must involve context-oriented, evidence-focused, design-based research that attends to the emergent, complex, and dynamic nature of educational contexts. [For the Grantee Submission, see ED605239.]
- Published
- 2020
- Full Text
- View/download PDF
40. Moderators of School Intervention Outcomes for Children with Autism Spectrum Disorder
- Author
-
Lopata, Christopher, Donnelly, James P., Thomeer, Marcus L., Rodgers, Jonathan D., Lodi-Smith, Jennifer, Booth, Adam J., and Volker, Martin A.
- Abstract
A prior cluster randomized controlled trial (RCT) compared outcomes for a comprehensive school intervention (schoolMAX) to typical educational programming (services-as-usual [SAU]) for 103 children with autism spectrum disorder (ASD) without intellectual disability. The schoolMAX intervention was superior to SAU in improving social-cognitive understanding (emotion-recognition), social/social-communication skills, and ASD-related impairment (symptoms). In the current study, a range of demographic, clinical, and school variables were tested as potential moderators of treatment outcomes from the prior RCT. Moderation effects were not evident in demographics, child IQ, language, ASD diagnostic symptoms, or school SES. Baseline externalizing symptoms moderated the outcome of social-cognitive understanding and adaptive skills moderated the outcome of ASD-related symptoms (no other comorbid symptoms or adaptive skills ratings moderated outcomes on the three measures). Overall, findings suggest that the main effects of treatment were, with two exceptions, unaffected by third variables. [This paper was published in "Journal of Abnormal Child Psychology" v48 p1105-1114 May 2020.]
- Published
- 2020
41. Limitless Regression Discontinuity
- Author
-
Sales, Adam C. and Hansen, Ben B.
- Abstract
Conventionally, regression discontinuity analysis contrasts a univariate regression's limits as its independent variable, "R," approaches a cut point, "c," from either side. Alternative methods target the average treatment effect in a small region around "c," at the cost of an assumption that treatment assignment, I[R
- Published
- 2020
- Full Text
- View/download PDF
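The conventional approach this abstract starts from -- contrasting regression limits on either side of the cut point -- is commonly implemented as local-linear regression within a bandwidth. A sketch with a uniform kernel (illustrative of that conventional estimator only, not of the authors' alternative method; names are my own):

```python
import numpy as np

def rdd_estimate(r, y, c, h):
    """Local-linear regression discontinuity estimate: fit a separate
    line to the outcome on each side of cut point c, using only points
    within bandwidth h, and contrast the two fitted values at r = c."""
    r, y = np.asarray(r, float), np.asarray(y, float)
    def fit_at_cut(mask):
        X = np.column_stack([np.ones(mask.sum()), r[mask] - c])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        return beta[0]  # intercept = predicted outcome at r = c
    above = (r >= c) & (r <= c + h)
    below = (r < c) & (r >= c - h)
    return fit_at_cut(above) - fit_at_cut(below)
```

On noiseless data with a known jump at the cut point, the estimate recovers the jump exactly; with real data the bandwidth choice trades bias against variance.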
42. Comparing Ed Reforms: Assessing the Experimental Research on Nine K-12 Education Reforms. EdChoice Brief
- Author
-
EdChoice and DiPerna, Paul
- Abstract
Policymakers, experts and advocates have promoted many different types of education reform over the past few decades, but what is the evidence about the efficacy of these programs? EdChoice partnered with Hanover Research to find out what research has been conducted in nine major education reform areas focusing on outcomes related to student achievement or education attainment. The best methodology available to researchers for generating "apples-to-apples" comparisons is a randomized control trial (RCT), which researchers also refer to as random assignment studies or experimental studies. For this summary only RCTs were reviewed--not studies using other methods--because experiments comparing treatment and control groups allow researchers to identify reform/policy/program effects while minimizing bias from unobservable factors. The brief shares the assessment of the RCT studies only examining two types of outcomes, either participating students' achievement or measures of educational attainment. [Reviews conducted by Hanover Research Editing and coding by EdChoice.]
- Published
- 2020
43. A Randomized Controlled Trial of a Trauma-Informed School Prevention Program for Urban Youth: Rationale, Design, and Methods
- Author
-
Mendelson, Tamar, Clary, Laura K., Sibinga, Erica, Tandon, Darius, Musci, Rashelle, Mmari, Kristin, Salkever, David, Stuart, Elizabeth, and Ialongo, Nick
- Abstract
Introduction: Youth in disadvantaged urban areas are frequently exposed to chronic stress and trauma, including housing instability, neighborhood violence, and other poverty-related adversities. These exposures increase risk for emotional, behavioral, and academic problems and ultimately, school dropout. Schools are a promising setting in which to address these issues; however, there are few universal, trauma-informed school-based interventions for urban youth. Methods/Design: Project POWER (Promoting Options for Wellness and Emotion Regulation) is a randomized controlled trial testing the impact of RAP Club, a trauma-informed intervention for 8th graders that includes mindfulness as a core component. Students in 32 urban public schools (n = 800) are randomly assigned to either RAP Club or a health education active control group. We assess student emotional, behavioral, and academic outcomes using self-report surveys and teacher ratings at baseline, post-intervention, and 4-month follow up. Focus groups and interviews with students, teachers, and principals address program feasibility, acceptability, and fidelity, as well as perceived program impacts. Students complete an additional self-report survey in 9th grade. Schools provide students' academic and disciplinary data for their 7th, 8th, and 9th grade years. In addition, data on program costs are collected to conduct an economic analysis of the intervention and active control programs. Discussion: Notable study features include program co-leadership by young adults from the community and building capacity of school personnel for continued program delivery. In addition to testing program impact, we will identify factors related to successful program implementation to inform future program use and dissemination. [This paper was published in "Contemporary Clinical Trials" Mar 2020.]
- Published
- 2020
- Full Text
- View/download PDF
44. Using Data from Randomized Trials to Assess the Likely Generalizability of Educational Treatment-Effect Estimates from Regression Discontinuity Designs
- Author
-
Bloom, Howard, Bell, Andrew, and Reiman, Kayla
- Abstract
This article assesses the likely generalizability of educational treatment-effect estimates from regression discontinuity designs (RDDs) when treatment assignment is based on academic pretest scores. Our assessment uses data on outcome and pretest measures from six educational experiments, ranging from preschool through high school, to estimate RDD generalization bias. We then compare those estimates (reported as standardized effect sizes) with the What Works Clearinghouse (WWC) standard for acceptable bias size (≤ 0.05δ) for two target populations, one spanning a half-standard-deviation pretest-score range and another spanning a full-standard-deviation pretest-score range. Our results meet this standard for all 18 study/outcome/pretest scenarios examined given the narrower target population, and for 15 scenarios given the broader target population. Fortunately, two of the three exceptions represent pronounced "ceiling effects" that can be identified empirically, making it possible to avoid unwarranted RDD generalizations, and the third exception is very close to the WWC standard. [This is the online version of an article published in "Journal of Research on Educational Effectiveness" (ISSN 1934-5747).]
- Published
- 2020
- Full Text
- View/download PDF
45. Randomized Trial Testing the Integration of the Good Behavior Game and MyTeachingPartner™: The Moderating Role of Distress among New Teachers on Student Outcomes
- Author
-
Patrick Tolan, Lauren Molloy Elreda, Catherine P. Bradshaw, Jason T. Downer, and Nicholas Ialongo
- Abstract
A growing body of research documents the effectiveness of classroom management programs on a range of student outcomes, yet few early-career teachers receive training on these practices prior to entering the classroom. Moreover, few studies have attended to how variations in teacher distress or level of classroom misbehavior affect training benefits. This study reports findings from a randomized trial of a teacher training program that combined two evidence-based programs (Good Behavior Game [GBG] and MyTeachingPartner™ [MTP]) to determine their impact on novice teachers and their students. In addition, the current study reports findings on moderated impacts by initial teacher distress as well as the overall classroom level of misbehavior. The sample included 188 early-career teachers (grades K-3) in their first three years of teaching from three large, urban school districts. Analyses indicated that the intervention had no main effects but yielded moderated impacts depending on the combination of baseline levels of classroom disruptive behavior and teacher distress; program impacts appear to have been greatest in the highest-risk circumstance (i.e., high teacher stress and elevated challenging student behaviors). In those classrooms, teachers assigned to the intervention evidenced improved behavior and student achievement compared to control counterparts by the spring of the training year, relative to the fall baseline (d = 0.18-0.70 depending on outcome). This study is significant in that it highlights effects during a critical window of training and coaching for early-career teachers and the need to consider teacher and classroom contextual factors that may moderate professional development efforts. [This paper was published in "Journal of School Psychology" v78 p75-95 2020.]
- Published
- 2020
- Full Text
- View/download PDF
46. Examining Design and Statistical Power for Planning Cluster Randomized Trials Aimed at Improving Student Science Achievement and Science Teacher Outcomes
- Author
-
Zhang, Qi, Spybrook, Jessaca, and Unlu, Fatih
- Abstract
With the increasing demand for evidence-based research on teacher effectiveness and student achievement, more impact studies are being conducted to examine the effectiveness of professional development (PD) interventions. Cluster randomized trials (CRTs) are often carried out to assess PD interventions that aim to improve both teacher and student outcomes. Due to the different design parameters (i.e., intraclass correlation and R²) and benchmark effect sizes associated with student and teacher outcomes, two power analyses are necessary for planning CRTs that aim to detect both teacher and student effects in one study. These two power analyses are often conducted separately, without considering how design choices to power the study to detect student effects may affect design choices to power the study to detect teacher effects, and vice versa. In this study, we consider strategies to maximize the efficiency of the study design when both student and teacher effects are of primary interest.
- Published
- 2020
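The dual power analysis described in the abstract can be illustrated with the standard minimum-detectable-effect-size (MDES) formula for a two-level CRT with cluster-level random assignment. This is a minimal sketch of that textbook formula, not code from the study; the parameter values (numbers of schools, students, teachers, ICCs, R² values) and the multiplier M ≈ 2.8 (α = .05, two-tailed, 80% power) are illustrative assumptions.

```python
import math

def mdes_two_level(J, n, icc, R2_level2, R2_level1, P=0.5, M=2.8):
    """MDES for a two-level CRT with clusters (e.g., schools) randomized.

    J: number of clusters; n: units (students or teachers) per cluster;
    icc: intraclass correlation; R2_level2 / R2_level1: variance explained
    by covariates at the cluster and individual levels; P: proportion of
    clusters treated; M: multiplier for alpha = .05 (two-tailed), 80% power,
    approximated as 2.8 for a moderate number of clusters.
    """
    var_term = (icc * (1 - R2_level2)) / (P * (1 - P) * J) \
             + ((1 - icc) * (1 - R2_level1)) / (P * (1 - P) * J * n)
    return M * math.sqrt(var_term)

# Student outcome: many students per school; school-level ICC dominates.
student_mdes = mdes_two_level(J=40, n=60, icc=0.20, R2_level2=0.5, R2_level1=0.3)
# Teacher outcome: few teachers per school, so the second term matters more.
teacher_mdes = mdes_two_level(J=40, n=3, icc=0.10, R2_level2=0.3, R2_level1=0.2)
```

With the same 40 schools, the teacher-outcome MDES comes out noticeably larger than the student-outcome MDES, which is exactly why a design powered only for student effects can be underpowered for teacher effects.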
47. The Placebo Effect in Education? Evidence-Based Educational Practice and the Psychoanalytic Concept of Transference
- Author
-
Hyldgaard, Kirsten
- Abstract
The concept of evidence and the demand for evidence-based practice and decision-making have for years dominated educational research. Concomitantly, randomized controlled trials (RCTs), which originally gained ground within the field of medicine, have become the gold standard for empirical research and political reform within the educational sciences, at least in Scandinavia. Nevertheless, proponents of such an approach have not fully explored the consequences of evidence-based methods, in particular the double-blind, placebo-controlled clinical trial. What is the equivalent of the placebo effect in education, and how does research in the educational sciences deal with this effect? The concept of placebo is arguably as close as medicine gets to acknowledging the psychoanalytic concept of the unconscious and unconscious knowledge. With the concept of transference, psychoanalysis offers a useful exploration of the processes and mechanisms leading to placebo effects.
- Published
- 2020
48. Heterogeneity in Mathematics Intervention Effects: Evidence from a Meta-Analysis of 191 Randomized Experiments
- Author
-
Society for Research on Educational Effectiveness (SREE), Citkowicz, Martyna, Lindsay, Jim, Miller, David, and Williams, Ryan
- Abstract
Many public and private initiatives have focused on improving prekindergarten (PK) through Grade 12 mathematics education, given that the mathematics content learned before college is foundational for later learning and college success across STEM subjects. Although decades of educational research have studied PK-12 mathematics interventions, the field lacks a comprehensive understanding of which interventions work, for whom, and under what conditions. This meta-analytic project aimed to examine the heterogeneity in mathematics intervention effects by synthesizing 25 years of randomized experiments of interventions designed to improve mathematics achievement. It describes intersections of interventions, intervention components, and outcomes that appear especially promising for future intervention research. [SREE documents are structured abstracts of SREE conference symposium, panel, and paper or poster submissions.]
- Published
- 2020
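The heterogeneity this meta-analytic project examines is conventionally quantified with Cochran's Q and the between-study variance τ², alongside fixed- and random-effects mean effect sizes. Below is a minimal sketch of the inverse-variance approach with the DerSimonian-Laird τ² estimator; the effect sizes and variances are made-up illustrations, not data from the 191 experiments.

```python
def meta_analyze(effects, variances):
    """Fixed- and random-effects means plus heterogeneity statistics."""
    w = [1 / v for v in variances]            # inverse-variance weights
    sw = sum(w)
    fixed_mean = sum(wi * d for wi, d in zip(w, effects)) / sw
    # Cochran's Q: weighted squared deviations from the fixed-effect mean.
    Q = sum(wi * (d - fixed_mean) ** 2 for wi, d in zip(w, effects))
    df = len(effects) - 1
    # DerSimonian-Laird estimate of between-study variance (floored at 0).
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (Q - df) / c)
    # Random-effects weights add tau2 to each study's variance.
    w_re = [1 / (v + tau2) for v in variances]
    re_mean = sum(wi * d for wi, d in zip(w_re, effects)) / sum(w_re)
    return fixed_mean, re_mean, Q, tau2

fixed, random_eff, Q, tau2 = meta_analyze(
    effects=[0.10, 0.35, 0.22, 0.60], variances=[0.01, 0.02, 0.015, 0.04])
```

When τ² > 0, the random-effects mean weights small studies more evenly than the fixed-effect mean does, which is one reason the two summaries can diverge in heterogeneous literatures like this one.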
49. Efficacy Validation of the Revised First Step Program: A Randomized Controlled Trial
- Author
-
Feil, Edward G., Walker, Hill M., Frey, Andy J., Seeley, John R., Small, Jason W., Golly, Annemieke, Lee, Jon, and Forness, Steven R.
- Abstract
Disruptive behavior problems frequently emerge in the preschool years and are associated with numerous long-term negative outcomes, including comorbid disorders. First Step is a psychosocial early intervention with substantial empirical evidence supporting its efficacy among young children (Walker et al., 2014). The present study reports on a validation study of the revised and updated First Step early intervention, called First Step Next (Walker, Stiller, et al., 2015), conducted within four preschool settings. One hundred sixty students at risk for school failure, and their teachers, were randomized to intervention and control conditions. Results indicated that coach and teacher adherence to implementing the core components of the program was excellent, and teachers and parents reported high satisfaction. For the three First Step Next prosocial domains, Hedges' g effect sizes ranged from 0.34 to 0.91. For the problem behavior domain, children who received the First Step Next intervention showed significant reductions in teacher- and parent-reported problem behavior compared to children randomized to the control condition, with Hedges' g effect sizes ranging from 0.33 to 0.63, again favoring the intervention condition. Effects in all domains were statistically significant. This study builds on the evidence base supporting the First Step intervention in preschool settings (Feil et al., 2014, 2016; Frey et al., 2015). [This paper will be published in "Exceptional Children."]
- Published
- 2020
- Full Text
- View/download PDF
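The Hedges' g values this abstract reports are standardized mean differences with a small-sample correction. A minimal sketch of the standard formula; the group means, SDs, and sample sizes below are illustrative only, not the study's data.

```python
import math

def hedges_g(m_t, m_c, sd_t, sd_c, n_t, n_c):
    """Hedges' g: Cohen's d scaled by the small-sample correction factor J."""
    df = n_t + n_c - 2
    # Pooled standard deviation across the two groups.
    s_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / df)
    d = (m_t - m_c) / s_pooled
    J = 1 - 3 / (4 * df - 1)  # corrects d's upward bias in small samples
    return J * d

# Illustrative values: 80 children per condition on a hypothetical rating scale.
g = hedges_g(m_t=3.4, m_c=3.0, sd_t=1.0, sd_c=1.1, n_t=80, n_c=80)
```

The correction factor J approaches 1 as the degrees of freedom grow, so g and Cohen's d are nearly identical at sample sizes like this study's (N = 160) but can differ meaningfully in very small trials.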
50. Getting It Right: Using Implementation Research to Improve Outcomes in Early Care and Education
- Author
-
Foundation for Child Development
- Abstract
As the number of publicly funded early childhood education (ECE) programs increases, policymakers will need empirical evidence to justify the taxpayer investment. Such justification will require a stronger understanding of the essential components of an ECE program's design, as well as solid evidence on which components, or constellations of components, are most effective in achieving strong outcomes for specific subgroups of children. Expectations for child outcomes must be based on the realities of the program components, the target populations, and the financial and human resources that support program implementation. More robust quantitative and qualitative data are needed to ensure stronger outcomes for all young children and to significantly narrow the opportunity and achievement gaps for minoritized children and those living in poverty. Believing in magic will not produce strong outcomes (Brooks-Gunn, 2003). Overpromising and underdelivering will have catastrophic results for the children and families who might benefit most from ECE initiatives. This volume asserts that randomized controlled trials (RCTs) could be greatly enhanced by findings from rigorous empirical research that provides contextual information about the participants, the settings, and the overall conditions under which the treatment is conducted. Throughout this volume, this type of analysis is referred to broadly as implementation research. However, the intent is not to provide a single definition of implementation research. Rather, the hope is to initiate a conversation centered on what else needs to be explored about how ECE programs are operationalized and what shape the research might take.
- Published
- 2020