1. How Big Is That? Reporting the Effect Size and Cost of ASSISTments in the Maine Homework Efficacy Study
- Author
Roschelle, Jeremy; Murphy, Robert; Feng, Mingyu; Bakia, Marianne (SRI Education; Worcester Polytechnic Institute (WPI); University of Maine, Center for Research and Evaluation)
- Abstract
In a rigorous evaluation of ASSISTments as an online homework support conducted in the state of Maine, SRI International reported that "the intervention significantly increased student scores on an end-of-the-year standardized mathematics assessment as compared with a control group that continued with existing homework practices" (Roschelle, Feng, Murphy & Mason, 2016). Naturally, education stakeholders want to know how big the improvement was. To answer this type of question, researchers report an effect size as a simple way of quantifying the difference between two groups. We reported an effect size of g = 0.18 of a standard deviation (t(20) = 2.992, p = 0.007) based on a two-level hierarchical linear model (Roschelle et al., 2016). An effect size is calculated by dividing the difference in scores between the two groups by the pooled standard deviation (Hedges, 1981). The underlying idea is that the strength of an effect depends both on the magnitude of the score difference and on how much the scores vary naturally. Consider this analogy to a commute: if it takes exactly 25 minutes to get to work every day, then a reduction to 22 minutes might mean a lot. Yet if the commute time varies between 10 minutes and 60 minutes, a reduction of the average time from 25 minutes to 22 minutes might not feel like much. Roschelle and colleagues (2016) also reported an improvement index corresponding to the effect size: "Students at the 50th percentile without the intervention would improve to the 58th percentile if they received the ASSISTments treatment." An improvement index is the expected percentile gain for the average student in the control group (the student who scored at the 50th percentile on the outcome measure) if that student had attended a school where the intervention was implemented. Reporting the effect size or an improvement index does not appear to answer educators' questions completely, however. To an educator, it may not be clear whether such numbers are high or low. In this technical report, we present alternatives for explaining the effect size, building on the guidance of Lipsey et al. (2012), who developed broad recommendations for effect size reporting. First, we provide additional detail on how we calculated the effect size and highlight the range of values that might be considered valid for this study. Second, we give comparisons with conventional benchmarks, a strategy that Lipsey and colleagues criticized but that still bears reporting. Third, we offer comparisons based on the recommendations of Lipsey et al. The report closes with a discussion of the challenges of interpreting effect sizes. The sidebar at the end of the report provides sample statements that educators may use to describe the study.
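The sketch below illustrates the arithmetic described in the abstract: Hedges' g as the mean difference divided by the pooled standard deviation, and the conversion of an effect size into a percentile gain. It is a minimal illustration only; the function names and all input values are made up for this example, not taken from the study, and the normal-distribution conversion shown is the common convention for improvement indices, which may round slightly differently than the report's model-based estimate.

```python
import math

# Minimal sketch of the effect-size arithmetic described in the abstract.
# All input values below are illustrative, not the study's data.

def pooled_sd(sd_t, n_t, sd_c, n_c):
    """Pooled standard deviation of the treatment and control groups."""
    return math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2))

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Hedges' g: mean difference over pooled SD, with a small-sample correction."""
    d = (mean_t - mean_c) / pooled_sd(sd_t, n_t, sd_c, n_c)
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)
    return d * correction

def improvement_index(g):
    """Percentile gain for the average control-group student, assuming a
    normally distributed outcome (a common convention; the report's exact
    calculation may differ slightly)."""
    percentile = 0.5 * (1 + math.erf(g / math.sqrt(2)))  # standard normal CDF
    return percentile * 100 - 50

# Example with made-up group summaries:
g = hedges_g(mean_t=52.0, sd_t=10.0, n_t=100, mean_c=50.0, sd_c=10.0, n_c=100)
print(round(g, 2))                        # ~0.20
print(round(improvement_index(0.18), 1))  # ~7 percentile points
```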
- Published
- 2017