172 results for "Mattern, Krista"
Search Results
2. A Case Study: ACT Section Retest Scores and Superscores Are Predictive of First-Term Grades. Technical Brief
- Author
- ACT, Inc., Radunzel, Justine, and Mattern, Krista
- Abstract
This study conducted in collaboration with a postsecondary institution highlights results from a concurrent validity study of administering ACT® section tests to their entering freshmen who previously took the ACT test in high school. Students' ACT scores obtained from section retesting were found to be as predictive of first-term grade point average (GPA) as scores obtained via traditional ACT testing. Additionally, ACT Superscores that were computed across test administrations that included single-subject section test events were found to be predictive of first-term GPA, alone and in combination with high school GPA. Moreover, the strength of this relationship did not significantly differ from that based on students' most recent ACT Composite scores.
- Published
- 2020
3. Predicting the Impact of COVID-19 School Closures on ACT Test Scores: Methods and Considerations for States and Districts. Issue Brief
- Author
- ACT, Inc., Allen, Jeff, Mattern, Krista, and Camara, Wayne
- Abstract
This brief presents a method for predicting the impact of school closures on average ACT® test scores. The purpose is to illustrate a methodology and the major conditions that should be considered in predicting aggregate ACT scores when schools close prematurely and learning and testing are disrupted. Across the three scenarios examined herein, the impact on academic achievement was predicted to be between -0.71 and -0.31 Composite score points. The predictions are based on a number of assumptions and may not be applicable in every case. ACT can assist state and district partners in applying and interpreting this or similar models to predict the effect of COVID-19 on learning and performance within their local context.
- Published
- 2020
4. The Impact of Superscoring on the Distribution of ACT Scores. Issue Brief
- Author
- ACT, Inc., Cruce, Ty, and Mattern, Krista
- Abstract
This brief describes the publicly available ACT Superscore Database (a Tableau dashboard). The database reports, nationally and for each state, the distribution of students across the full ACT Composite score scale under three scoring methods: most recent score, highest score from a single test attempt, and ACT Superscore based on the highest subject test scores across all test attempts.
- Published
- 2020
5. Section Retesting: Do Students Perform as Expected? Technical Brief
- Author
- ACT, Inc., Radunzel, Justine, and Mattern, Krista
- Abstract
Beginning in September 2020, students will have the option to retake one or more sections of the ACT® test (referred to as section retesting, modular testing, or single-subject retesting), instead of needing to take the entire battery again. Section retests will only be available to students who have previously completed the full battery and only available to students retesting online. The section retest option is being made available to students because research conducted to date indicates that ACT scores combined across multiple administrations are valid; this includes results from the current study which suggest that students' performance when retesting in a single ACT subject area tends to be consistent with expected performance estimated from standard retesting with the full battery. The focus of the current study was to examine student performance on single-subject retests. More specifically, the study objective was to examine whether section retesting results in larger score gains as compared to traditional retesting (taking the entire battery). That is, this study evaluated whether allowing students to take one subject test at a time resulted in students performing better than what is typically seen among students testing with the full battery, taking into account students' prior ACT scores and other testing characteristics. The study also examined how performance on the single-subject retest relates to subsequent performance on the full-battery ACT to evaluate whether any improved performance is associated with true learning gains. Methodology and findings from the study are discussed in detail in this report.
- Published
- 2020
6. Initial Efficacy Evidence for the ACT Certified Educator Program. Technical Brief
- Author
- ACT, Inc., Radunzel, Justine, Mattern, Krista, and Schiel, Jeff
- Abstract
ACT recently launched the ACT Certified Educator program to assist educators with enhancing their teaching strategies, and to help students improve their academic achievement. This study summarizes participants' reactions to the program.
- Published
- 2019
7. ACT's Efficacy Framework: The Intersection of Learning, Measurement, and Navigation. Issue Brief
- Author
- ACT, Inc. and Mattern, Krista
- Abstract
A great deal has been written on the topic of test validity. Guiding our work at ACT are "The Standards for Educational and Psychological Testing" (2014), which outlines best practices in test development and validation. As ACT transitions from an assessment company to a learning, measurement, and navigation organization, a framework for our learning products is also needed to guide research activities and to ensure that the hallmarks of rigor, science, and scrutiny applied to our measurement solutions are built into our learning products as well. The fields of education and educational measurement have devoted much less attention to the development of efficacy arguments and frameworks than to validity arguments and frameworks. More importantly, the integration of validity theory and efficacy theory into an overarching Efficacy Framework is needed so that we can stand up an evaluation system, with appropriate feedback loops, for assessing whether our solutions are designed to have the greatest impact on learner outcomes. This point is especially critical because it lies at the heart of ACT's mission: if we cannot evaluate whether our products are efficacious, how do we know whether we are achieving our mission of helping individuals achieve educational and workplace success? The Efficacy Framework presented here is an attempt to guide research activities at ACT in support of our mission.
- Published
- 2019
8. Validity Considerations for 10th-Grade ACT State and District Testing. Insights in Education and Work
- Author
- ACT, Inc., Allen, Jeff M., and Mattern, Krista
- Abstract
States and districts have expressed interest in administering the ACT® to 10th-grade students. Given that the ACT was designed to be administered in the spring of 11th grade or the fall of 12th grade, the appropriateness of this use should be evaluated. As such, the focus of this paper is to summarize empirical evidence evaluating the use of the ACT as a measure of college readiness for 10th graders. In alignment with the Standards for Educational and Psychological Testing, empirical evidence related to five sources of validity evidence (response processes, internal structure, content, relation to other variables, and consequences) is summarized. As compared to 11th-grade test administrations, the results indicate that when the ACT is administered to all 10th graders: (1) Students are similarly motivated (response processes); (2) Scores are only slightly less reliable (internal structure); and (3) ACT scores and test completion rates are predictably lower for 10th graders relative to 11th graders. Additionally, the results indicate that: (1) The content of the test is aligned to college readiness standards based on what students learn in high school and need to know to succeed in college (content); (2) How students perform on the ACT in 10th grade is comparable to performance on other 10th-grade measures of college readiness (i.e., the PreACT®); and (3) ACT scores from 10th grade are predictive of high school grades and 11th-grade ACT scores (relation to other variables). Unlike other 10th-grade tests, the ACT provides a college-reportable score and greater test security (relative to the PreACT test). Collectively, the evidence supports the use of the ACT as a measure of college readiness or academic preparation level for 10th-grade students.
- Published
- 2019
9. Inflection Point: The Role of Testing in Admissions Decisions in a Postpandemic Environment
- Author
- Camara, Wayne J. and Mattern, Krista
- Abstract
In 2020, the onset of COVID-19 greatly restricted access to admissions testing in higher education and required innovative solutions and flexibility, such as at-home testing with remote proctoring, reduced testing time, pop-up locations, and additional testing dates. Increased focus on social justice, diversity, and fairness continued to concern admissions professionals during this time. This article is intended to provide an update (Camara) on admissions testing as we enter 2022, documenting enhancements and changes across testing programs. In addition, we report recent data and findings on applications, enrollment, and test taking, as well as the prevalence of test-optional and test-blind policies and their impact on score sending in undergraduate, graduate, business, law, and medical schools. It is important to note that many questions remain unanswered. Our original intent was to include more information on the impact of test-optional policies on diversity, as we thought the pandemic would be in the rear-view mirror by now. Given the lingering effects of the pandemic, it will be critical to evaluate the impact of these policy changes on the entering class of 2021 and beyond as those data become available.
- Published
- 2022
10. The Use of CollegeReady to Improve Course Performance in English without the Need for Formal Remediation: A Case Study at Chattanooga State Community College. Research Report 2018-6
- Author
- ACT, Inc., Cruce, Ty M., Lowe, Judy, and Mattern, Krista
- Abstract
Many students graduate from high school under-prepared for college-level coursework, leading to a large number of students entering college who require remediation in English and mathematics. This study looked at the effectiveness of using EdReady--now offered through ACT as CollegeReady™--as a system for delivering remediation prior to college to improve students' course performance. We found that students who elected to skill up with the product to the point where they avoided remedial coursework outperformed their peers with regard to their pass rate and course grades in their first college-level English composition course, providing promising support for the program. Future research should employ experimental and/or quasi-experimental designs to isolate the causal effect of CollegeReady on college success.
- Published
- 2018
11. Implementing CollegeReady™ to Promote Students' Preparation for College-Level Math: Jacksonville State University Case Study. ACT Research & Policy
- Author
- ACT, Inc., Westrick, Paul, and Mattern, Krista
- Abstract
Almost two-thirds of students entering a community college and a third of students entering a 4-year college lack basic math and writing skills, and they often find themselves placed in developmental or remedial courses in their first year of college. Unfortunately, students placed into remedial math and English courses often have poorer educational outcomes; their retention and degree completion rates lag behind those of the students who enter college ready for college-level work. Colleges and universities have recognized this problem, and many are taking steps to help students improve their academic preparation with the goal of reducing the need for remedial course-taking. In particular, many colleges, including Jacksonville State University (JSU), have implemented EdReady -- now offered through ACT as CollegeReady -- for this very purpose. Unlike traditional placement tests, CollegeReady is a low-stakes placement system. Students can log on to the system at any time from any location and work at their own pace. If their initial CollegeReady score falls below the institution's target score, students can view study options and follow a personalized learning path to fill gaps in knowledge and skills. Using this approach, many students raise their scores and avoid remediation. In partnership with JSU, ACT researchers examined the relationships of incoming students' initial and most recent EdReady math scores with course placement decisions and math course outcomes. Preliminary findings from this study suggest that CollegeReady can help students bolster their math preparation and be successful going directly into college-level courses.
- Published
- 2018
12. An Empirically-Derived Index of High School Academic Rigor. ACT Working Paper 2017-5
- Author
- ACT, Inc., Allen, Jeff, Ndum, Edwin, and Mattern, Krista
- Abstract
We derived an index of high school academic rigor by optimizing the prediction of first-year college GPA based on high school courses taken, grades, and indicators of advanced coursework. Using a large data set (n~108,000) and nominal parameterization of high school course outcomes, the high school academic rigor (HSAR) index capitalizes on differential contributions across courses and nonlinear relationships between course grades and first-year college GPA (FYGPA). Test scores from 8th grade were incorporated in the model to isolate the effect of HSAR. High school courses with the largest contributions to FYGPA were English 11, English 10, Chemistry, and Algebra 2. Participation in AP, accelerated, or honors courses increased HSAR. The correlation of the HSAR index and FYGPA was 0.50 and 0.49 in two cross-validation samples. While the HSAR index was the strongest predictor of FYGPA, it only led to a modest improvement in overall prediction when combined with high school GPA (HSGPA) and ACT Composite score. The predictive strength of the HSAR index was consistent across different types of high schools and colleges, and subgroup differences in the HSAR index were smaller than subgroup differences in ACT Composite score. Implications for high school counselors, researchers, and postsecondary student service personnel are discussed.
- Published
- 2017
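Result 12 above describes deriving a rigor index by regressing first-year college GPA on nominally (dummy) coded high school course outcomes while controlling for a prior test score. The sketch below illustrates only that general idea; the column names, synthetic data, and simple linear model are assumptions, not ACT's actual HSAR specification.

```python
# Sketch only: a toy "academic rigor" index built by regressing first-year
# college GPA (FYGPA) on dummy-coded course outcomes, with a prior test score
# in the model so the index reflects coursework beyond prior achievement.
# Column names and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "algebra2_grade": rng.choice(["A", "B", "C", "none"], n),
    "chemistry_grade": rng.choice(["A", "B", "C", "none"], n),
    "took_ap_honors": rng.integers(0, 2, n),
    "grade8_score": rng.normal(0, 1, n),
})
df["fygpa"] = 2.8 + 0.3 * df["grade8_score"] + rng.normal(0, 0.5, n)

# Nominal (dummy) parameterization of course outcomes, as the abstract describes.
X = pd.get_dummies(df.drop(columns="fygpa"),
                   columns=["algebra2_grade", "chemistry_grade"])
model = LinearRegression().fit(X, df["fygpa"])

# The index is the model's prediction with the prior-score effect zeroed out,
# so it summarizes the coursework-related contributions only.
X_no_prior = X.copy()
X_no_prior["grade8_score"] = 0.0
df["rigor_index"] = model.predict(X_no_prior)
print(df["rigor_index"].describe())
```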
13. How Should Colleges Treat Multiple Admissions Test Scores? ACT Working Paper 2017-4
- Author
- ACT, Inc., Mattern, Krista, Radunzel, Justine, Bertling, Maria, and Ho, Andrew
- Abstract
The percentage of students retaking college admissions tests is rising (Harmston & Crouse, 2016). Researchers and college admissions offices currently use a variety of methods for summarizing these multiple scores. Testing companies, interested in validity evidence like correlations with college first-year grade-point averages (FYGPA), often use the most recent test score available (Allen, 2013; Mattern & Patterson, 2014). In contrast, institutions report using a variety of composite scoring methods for applicants with multiple test records, including averaging and taking the maximum subtest score across test occasions ("superscoring"). We compare four scoring methods (average, highest, last, and superscoring) on two criteria. First, we compare correlations between scores from each scoring method and FYGPA. We find them similar (r ≈ 0.40). Second, we compare scores from each scoring method based on whether they differentially predict FYGPA across the number of testing occasions (retakes). We find that retakes account for additional variance beyond standardized achievement and positively predict FYGPA across all scoring methods. We also find that superscoring minimizes this differential prediction--although it may seem that superscoring should inflate scores across retakes, this inflation is "true" to the extent that it accounts for the positive effects of retaking for predicting FYGPA. Future research should identify what factors, such as academic motivation and socioeconomic status, are related to retesting and consider how these factors should be treated in college admissions.
- Published
- 2017
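Result 13 compares four ways of summarizing scores for students with multiple test records: average, highest single administration, most recent, and superscore. The sketch below shows one plausible way to compute those four summaries from a long-format score table; the table layout, column names, and composite rounding are assumptions rather than the study's actual data structure.

```python
# Sketch only: the four multi-test summaries compared in result 13, computed
# from a hypothetical table with one row per student per test attempt.
import pandas as pd

attempts = pd.DataFrame({
    "student": ["s1", "s1", "s2"],
    "attempt": [1, 2, 1],
    "english": [20, 24, 30],
    "math":    [26, 23, 29],
    "reading": [22, 25, 31],
    "science": [21, 22, 28],
})
subjects = ["english", "math", "reading", "science"]
attempts["composite"] = attempts[subjects].mean(axis=1).round()

by_student = attempts.sort_values("attempt").groupby("student")
summary = pd.DataFrame({
    "average":    by_student["composite"].mean(),
    "highest":    by_student["composite"].max(),
    "last":       by_student["composite"].last(),
    # Superscore: best score per subject across attempts, then averaged.
    "superscore": by_student[subjects].max().mean(axis=1).round(),
})
print(summary)
```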
14. Improving the Validity and Diversity of A College Admissions Selection System
- Author
- Mattern, Krista and Walton, Kate
- Published
- 2022
15. Who Will Declare a STEM Major? The Role of Achievement and Interests. ACT Research Report Series 2017-4
- Author
- ACT, Inc., Radunzel, Justine, Mattern, Krista, and Westrick, Paul
- Abstract
As new initiatives and programs are increasingly implemented to promote STEM (Science, Technology, Engineering, and Mathematics) interest and participation among U.S. students, the percentage of students who declare a STEM-related major in college continues to lag behind what would be expected based on students' intentions. Such findings underscore the value of understanding why students who are interested in STEM do not pursue a STEM degree. To answer that question, the current study developed a multidimensional model of STEM major choice based on academic and non-academic student characteristics. The identification of factors to include in the model was based on previous literature supporting psychological theories related to major selection; namely, we focused on the theory of planned behavior and person-environment fit models. The findings support these theories: students' achievement levels, high school coursework and grades, major intentions, the certainty of those intentions, and measured interest in STEM were all significantly related to STEM major choice. The results can help inform initiatives to identify students most likely to enter the STEM pipeline and to provide resources and support for this career pathway. Two appendices provide supplemental tables and additional details regarding the calculation of the adjusted odds ratios (ORs).
- Published
- 2017
16. More Information, More Informed Decisions: Why Test-Optional Policies Do 'Not' Benefit Institutions or Students. ACT Insights in Education & Work
- Author
- ACT, Inc., Mattern, Krista, and Allen, Jeff
- Abstract
In this research report, we review commonly held beliefs about test-optional policies and practices. Focusing solely on empirical evidence, we highlight research findings that directly address the stated intentions and actual outcomes of such practices. Throughout the paper, we raise concerns with test-optional policies as they pertain both to institutions and to the students they serve. We conclude with recommendations that colleges and universities employ holistic models of education readiness and success, supported by the notion that more information about students is better than less. [For the Technical Brief to this report, "More Information, More Informed Decisions: Why Test-Optional Policies Do "Not" Benefit Institutions or Students. Technical Brief," see ED573718.]
- Published
- 2016
17. The Role of Academic Preparation and Interest on STEM Success. ACT Research Report Series
- Author
- ACT, Inc., Radunzel, Justine, Mattern, Krista, and Westrick, Paul
- Abstract
Research has shown that science, technology, engineering, and mathematics (STEM) majors who are more academically prepared--especially in terms of their mathematics and science test scores--are more likely to be successful across a variety of outcomes: cumulative grade point average (GPA), persistence in a STEM major, and ultimately earning a STEM degree. Research also shows, however, that many highly prepared STEM majors do not end up earning a STEM degree; likewise, some less academically prepared STEM majors persist and graduate with a STEM degree. These findings are consistent with a growing understanding that educational success is a product of a variety of cognitive and noncognitive factors. This study sought to identify student characteristics that, in addition to test scores, can be used to identify STEM majors who are likely to persist and ultimately complete a STEM degree. The study examined the relationship between students' chances of long-term success in college and their academic preparation and achievement, their expressed and measured interests in STEM, and their demographic characteristics. Data on background characteristics, academic readiness for college, career-related interests, and college outcomes were obtained for nearly 76,000 STEM majors who enrolled as first-time entering students in fall 2005 through 2009 at 85 two- and four-year institutions. Academic readiness indicators included ACT® test scores, high school coursework, and grades earned. Students' interests in STEM fields were measured using their ACT Interest Inventory scores and their expressed major preference. Outcomes included annual cumulative GPA, persistence in a STEM-related field, and degree completion within six years. Student outcomes were tracked for at least four years and, where possible, across in-state institutions. Hierarchical regression models accounting for institution attended were used to estimate students' chances of succeeding in a STEM major. Results were evaluated by type of institution and STEM major category (Science; Computer Science & Mathematics; Medical & Health; and Engineering & Technology). As expected, students who were better prepared in mathematics and science, as measured by achieving higher ACT scores, taking higher-level high school coursework, and earning higher HSGPAs in these subject areas, were more likely than those who were less prepared to earn a cumulative college GPA of 3.0 or higher, to persist in a STEM major through year 4, and to complete a STEM degree in four, five, or six years. Moreover, after statistically controlling for academic preparation and demographic characteristics, students with both expressed and measured interest in STEM were more likely to persist and complete a STEM degree than those with either expressed or measured interest only, as well as those with no interest in STEM. These findings were observed for each of the STEM major categories, though college success rates differed somewhat among STEM major categories. Additionally, gender and racial/ethnic differences in STEM persistence and STEM degree completion rates depended on STEM major category and type of institution. These findings highlight the importance of helping students to have realistic expectations about the rigorous mathematics and science course requirements in STEM-related fields and to select a major that is aligned well with their academic skills and interests. Strong academic preparation for STEM fields needs to take place long before students enroll in college. 
Educators, advisors, and counselors can assist students in these areas by providing students with meaningful educational and career guidance that encourages them to explore personally relevant career options based on their own skills, interests, and aspirations.
- Published
- 2016
18. The Importance of Graduating from High School College and Career Ready: The Positive Relationship between ACT Score and Future Earnings. ACT Research. Data Byte
- Author
- ACT, Inc., Mattern, Krista D., and Cruce, Ty M.
- Abstract
Previous research has linked higher levels of academic preparation, as measured by the ACT®, to postsecondary success, such as earning higher college grades and completing a college degree in a timely manner. This ACT Data Byte extends these findings, illustrating the positive relationship between graduating from high school college and career ready and future annual earnings. Data from a collaboration between ACT and Opportunity Insights--a non-profit organization located at Harvard University--show that the annual earnings individuals report during their late 20s are positively related to their academic achievement level measured during high school.
- Published
- 2021
19. Preparing Students for College and Careers
- Author
- McClarty, Katie Larsen, Mattern, Krista D., and Gaertner, Matthew N.
- Subjects
Academic expectations, Assessment, Brent Duckor, Career Readiness, Carolyn Landel, Chrissy Tillery, Christopher F. Chabris, College Readiness, college placement, David T. Conley, Educational Measurement, Elisabeth Barnett, educational practice, educational research, evidence-based standards, Francesca Fraga Leahy, Frank C. Worrell, formative assessment, Gaertner, intervention, James W. Pellegrino, Jeff M. Allen, Joann L. Moore, Jonathan Wai, Kathryn M. Kroeper, Katie Larsen McClarty, Krista D. Mattern, Larsen-McClarty, Margaret Heritage, Mary C. Murphy, Education: examinations and assessment, Psychological testing and measurement, Educational psychology
- Abstract
Preparing Students for College and Careers addresses measurement and research issues related to college and career readiness. Educational reform efforts across the United States have increasingly taken aim at measuring and improving postsecondary readiness. These initiatives include developing new content standards, redesigning assessments and performance levels, legislating new developmental education policy for colleges and universities, and highlighting gaps between graduates’ skills and employers’ needs. In this comprehensive book, scholarship from leading experts on each of these topics is collected for assessment professionals and for education researchers interested in this new area of focus. Cross-disciplinary chapters cover the current state of research, best practices, leading interventions, and a variety of measurement concepts, including construct definitions, assessments, performance levels, score interpretations, and test uses.
- Published
- 2022
20. Developing a Validity Argument for Social and Emotional Learning Assessments
- Author
- Mattern, Krista and Walton, Kate E.
- Published
- 2022
21. Beyond Academics: A Holistic Framework for Enhancing Education and Workplace Success. ACT Research Report Series. 2015 (4)
- Author
- ACT, Inc., Camara, Wayne, O'Connor, Ryan, Mattern, Krista, and Hanson, Mary Ann
- Abstract
Colleges have long recognized the importance of multiple domains. Admissions officers look to high school grades as indicators of persistence and achievement; student statements and letters of recommendation as indicators of character, behavior, and adaptability; the rigor of courses completed in high school as evidence of effort, motivation, and challenge; and activities and extracurricular involvement as indicators of leadership, teamwork, and collaboration. Research summarized in this report and an earlier report (Mattern et al., 2014) calls attention to the research basis for examining multiple domains and the importance of nonacademic domains for predicting outcomes such as retention, persistence, and engagement in college as well as graduation from college. These reports also summarize similar findings for employment, where employers use a wide range of practices to make inferences about individuals' likely adaptation, persistence, and contribution to the job, organization, and society. Most know of academically talented students who did not persist in college and highly skilled workers who failed in their jobs. Building on research conducted at ACT over the last fifty years, this report describes the development of a holistic framework that can provide a more complete description of education and work readiness. The framework is organized into four broad domains: core academic skills, cross-cutting capabilities, behavioral skills, and education and career navigation skills. To take full advantage of emerging knowledge in this area, development of this framework is based on a comprehensive review of relevant theory, education and work standards, empirical research, input from experts in the field, and a variety of other sources for each of the four broad domains. The report also begins to build an integrated view of education and work readiness, acknowledging that constructs across the four broad domains are not independent, that their combined effects provide a more holistic understanding, and that different constructs are often more or less important for different outcomes associated with education and work success. To illustrate the multidimensional nature of readiness for education and workplace success, examples are provided that focus on two key transitions: the transition from high school to college and the transition from college to work. For each of these two transitions, a holistic model of success, specifying factors from each of the broad domains that are important for success is provided. Similar models can and should be developed for different outcomes, since the same constructs are not equally important across all outcomes. The hope is that the reader will take away a few central findings and ideas from this report and other research conducted by ACT on college and career readiness. The table of contents provides the following: (1) ACT Holistic Framework of Education and Work Readiness (Krista D. Mattern, Mary Ann Hanson); (2) Core Academic Skills (Ryan O'Connor, James Gambrell, and Robert Pulvermacher); (3) Cross-Cutting Capabilities (Ryan O'Connor, James Gambrell, and Robert Pulvermacher); (4) Behavioral Skills (Alex Casillas, Jason Way, and Jeremy Burrus); (5) Education and Career Navigation (Becky Bobek and Ran Zhao); and (6) Toward an Integrated Framework of Education and Work Readiness (Jeremy Burrus and Krista Mattern). Domain-Specific Framework Development Methodology is contained in the appendix.
- Published
- 2015
22. Who Goes to Graduate School? Tracking 2003 ACT®-Tested High School Graduates for More than a Decade. ACT Research Report Series, 2015 (2)
- Author
- ACT, Inc., Mattern, Krista, and Radunzel, Justine
- Abstract
Many students who earn a bachelor's degree also aspire to earn a graduate degree. In this study, we examined student and institutional characteristics that are related to graduate school enrollment. Student characteristics included demographic characteristics; high school performance measures, coursework taken, and extracurricular activities; college intentions and educational plans; and undergraduate enrollment and degree measures. Institution-level characteristics included college control, college selectivity, and Historically Black College or University (HBCU) designation. The sample for this study consisted of more than 14,000 ACT-tested students who graduated from high school in 2003, who enrolled in college, and who earned a bachelor's degree within eight years of initial enrollment. Nearly one-half (46%) of the bachelor's degree recipients subsequently enrolled in a graduate program. Graduate enrollment rates varied significantly by student and institutional characteristics. Higher graduate enrollment rates were observed for students who were more academically prepared upon high school graduation (as measured by ACT test scores, high school coursework taken, and grades earned), those who had intentions of taking advanced college coursework and graduate school aspirations, and those who earned a bachelor's degree in four years or less. Females were more likely than males to enroll in a graduate program (50% vs. 40%). Among all racial/ethnic groups, African American students had the highest graduate enrollment rate (55%); likewise, HBCU students had higher graduate enrollment rates than non-HBCU students. Graduate enrollment rates were also found to vary by undergraduate major. For example, business majors had one of the lower graduate enrollment rates (31%), whereas biological and biomedical science majors had one of the higher rates (68%). Controlling for multiple variables simultaneously, study results indicated that gender, race/ethnicity, ACT Composite score, graduate school aspirations, earning a bachelor's degree in a timely manner, and graduating from a HBCU institution were strongly related to graduate school enrollment. Other variables that were positively related to graduate school enrollment, but to a lesser extent, included: taking advanced, accelerated, or honors courses in high school, receiving a leadership award in high school, planning to take an independent study course while in college, intending to receive college credit by exam, and planning not to work while in college. The results of this study demonstrate the importance of sound academic preparation for college so that students are well equipped to achieve their educational goals; many students' goals include earning a post-baccalaureate degree. Enrolling in graduate school has potential benefits to the individual student such as greater self-esteem and long-term earning potential. This is particularly noteworthy in light of our findings that African American students and students attending HBCUs were more likely to enroll in graduate school, holding all else constant. Interventions aimed at promoting post-baccalaureate pursuits for these populations could potentially help reduce the economic disparities that exist by race/ethnicity. Equipping students to achieve such goals also helps the United States build a more highly skilled workforce and preserve the nation's global competitiveness. An appendix contains additional tables.
- Published
- 2015
23. Development of STEM Readiness Benchmarks to Assist Educational and Career Decision Making. ACT Research Report Series, 2015 (3)
- Author
- ACT, Inc., Mattern, Krista, Radunzel, Justine, and Westrick, Paul
- Abstract
Although about 40% of high school graduates who take the ACT® test express interest in pursuing a career in a science, technology, engineering, and mathematics (STEM) field, the percentage of first-year students in college who declare a STEM major is substantially lower. The pool of prospective STEM workers shrinks further as the majority of STEM majors do not earn a STEM degree. A lack of academic preparation in science and mathematics has been offered as one explanation for the leaky STEM pipeline. The purpose of this research was to develop STEM readiness benchmarks to provide prospective students more tailored information on the level of knowledge and skills needed to have a reasonable chance of success in first-year STEM courses. The research had three components. Study 1 identified the mathematics and science courses that STEM majors take most often in the first year of college. In mathematics, the most prevalent course was Calculus; in science, multiple courses were identified as typically taken by STEM majors: Biology, Chemistry, Engineering, and Physics. Study 2 derived empirically based STEM readiness benchmarks in mathematics and science by estimating the ACT Mathematics and Science test scores associated with a 50% probability of earning a grade of B or higher in the identified STEM courses. Specifically, the median ACT Mathematics score associated with a 50% probability of earning a B or higher grade in Calculus is 27, and the median ACT Science score associated with a 50% probability of earning a B or higher grade in Chemistry, Biology, Physics, or Engineering is 25. Study 3 validated the STEM readiness benchmarks on more distal indicators of success. Results demonstrated that STEM majors who met the STEM readiness benchmarks were more likely to earn a cumulative grade point average of 3.0 or higher, persist in a STEM major, and earn a STEM-related bachelor's degree. Providing STEM readiness information to prospective students may help facilitate the transition to college by aligning students' expectations with course demands. An appendix contains additional tables.
- Published
- 2015
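Result 23 (Study 2) describes estimating the test score associated with a 50% probability of earning a grade of B or higher. The sketch below illustrates that kind of benchmark derivation with a single logistic regression on synthetic data; the actual methodology (course-level models, medians across courses, institutional nesting) is not reproduced here, and the data-generating assumptions are mine.

```python
# Sketch only: fit a logistic model of "earned B or higher" on a test score,
# then solve for the score with a 0.5 predicted probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
scores = rng.integers(14, 37, size=2000)            # ACT-like score scale
p_true = 1 / (1 + np.exp(-(scores - 27) / 3))        # hidden "true" relationship
earned_b = rng.random(2000) < p_true

model = LogisticRegression().fit(scores.reshape(-1, 1), earned_b)
b0, b1 = model.intercept_[0], model.coef_[0][0]

# P(B or higher) = 0.5 exactly where b0 + b1 * score = 0.
benchmark = -b0 / b1
print(f"Estimated benchmark score: {benchmark:.1f}")
```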
24. Considering Practical Uses of Advanced Placement® Information in College Admission. Research Note 2014-1
- Author
- College Board, Shaw, Emily J., Marini, Jessica, and Mattern, Krista D.
- Abstract
The study evaluated the predictive validity of various operationalizations of AP® Exam and course information that could be used to make college admission decisions. The incremental validity of different AP variables, above and beyond traditional admission measures such as SAT® scores and high school grade point average (HSGPA), in predicting first-year grade point average (FYGPA) was also explored. The AP variables examined included the following: the number of AP Exams a student took, the number of AP Exams on which the student received a score of 3 or higher, the proportion of AP Exams the student took relative to the number of AP courses offered at his or her high school, the student's average AP score, and the number of AP scores the student received that were greater than or equal to 3. With regard to the incremental validity of the different AP predictors above and beyond HSGPA and SAT scores in predicting FYGPA, we found that the AP Average score variable produced the greatest increment. This report discusses the practical implications of these results for using AP information, in addition to traditional admission measures, to improve admission decisions. The following are appended: (1) Descriptive Statistics of Study Variables; (2) Subgroup Differences by AP Measures; (3) Raw and Corrected Correlations between Predictors and FYGPA; (4) Incremental Validity of AP Measures above and beyond SAT and HSGPA to Predict FYGPA; (5) Scatterplots of mean FYGPA by the six AP variables; and (6) Mean FYGPA by AP Average Score for all students with a HSGPA = 4.00 and SAT CR+M+W = 1800.
- Published
- 2014
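Result 24 examines incremental validity: how much an AP-based variable improves prediction of FYGPA beyond SAT scores and HSGPA. The sketch below shows the basic comparison of multiple correlations with and without the added predictor; the variable names and data are hypothetical, and refinements used in the actual study (e.g., corrections or within-institution analysis) are not reproduced.

```python
# Sketch only: gain in multiple correlation with FYGPA when an AP-based
# predictor is added to SAT and HSGPA. Data are synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "sat": rng.normal(1100, 150, n),
    "hsgpa": rng.normal(3.3, 0.4, n),
    "ap_avg_score": rng.uniform(1, 5, n),
})
df["fygpa"] = (0.001 * df["sat"] + 0.5 * df["hsgpa"]
               + 0.1 * df["ap_avg_score"] + rng.normal(0, 0.4, n))

def multiple_r(predictors):
    """Correlation between observed FYGPA and its OLS prediction."""
    fit = LinearRegression().fit(df[predictors], df["fygpa"])
    return np.corrcoef(fit.predict(df[predictors]), df["fygpa"])[0, 1]

base = multiple_r(["sat", "hsgpa"])
full = multiple_r(["sat", "hsgpa", "ap_avg_score"])
print(f"R(base) = {base:.3f}, R(+AP) = {full:.3f}, increment = {full - base:.3f}")
```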
25. A Model-Based Examination of College Outcomes for AP® Fee Reduction Students. Research Report
- Author
- College Board, Wyatt, Jeffrey N., and Mattern, Krista D.
- Abstract
A recent report by Wyatt and Mattern (2011) compared college outcomes for low-socioeconomic status (low-SES) students who received Advanced Placement® (AP®) fee reductions versus low-SES students who did not participate in the AP Program. The results indicated that AP Fee Reduction students had better college outcomes than students from low-SES backgrounds who did not participate in AP. The results were parsed by gender, ethnicity, HSGPA, SAT® score, and highest parental education level to evaluate whether the AP effect remained after considering these variables. In general, there was still an AP effect; however, these analyses only controlled for one variable at a time. This report describes a follow-up study employing more rigorous methods to determine whether an AP effect remains when all demographic and academic variables are simultaneously controlled for using regression analysis. Results reveal that after controlling for gender, ethnicity, HSGPA, SAT score, and highest parental education level concurrently, low-SES students who participated in AP through the fee reduction program were more likely to enroll in a four-year college, transfer to a four-year college from a two-year college, earn higher college grades, and ultimately graduate college as compared to low-SES students not participating in AP.
- Published
- 2014
26. Broadening the Definition of College and Career Readiness: A Holistic Approach. ACT Research Report Series, 2014 (5)
- Author
- ACT, Inc., Mattern, Krista, Burrus, Jeremy, and Camara, Wayne
- Abstract
A hallmark of the US education system is the opportunity afforded to students to pursue education and career paths of their own choosing. This flexibility and autonomy, however, have drawbacks. Students must navigate a series of complex and often disconnected environments, as well as numerous decision points, before they attain a fulfilling career. The purpose of this paper is to demonstrate that while core academic skills are necessary, they are not sufficient for academic and workplace success, and that a holistic approach to College and Career Readiness is needed. This paper is divided into five sections. Section 1 summarizes the current evidence on both the importance of having a well-educated nation and the reality that the current educational system is leaving a large percentage of students unprepared for college and work, and proposes potential solutions to better foster College and Career Readiness. Section 2 describes previous attempts to operationalize College and Career Readiness and their limitations, notably the narrow focus on core academic skills. Section 3 reviews the existing empirical evidence on the predictors of educational and workplace success, which clearly indicates that success in education and at work is a function of a wide range of cognitive and noncognitive skills. Section 4 addresses some of the barriers that have impeded the inclusion of noncognitive skills in definitions, assessments, and reporting of College and Career Readiness. Section 5 describes ACT's move toward a more holistic approach to College and Career Readiness.
- Published
- 2014
27. Synthesis of Recent SAT Validity Findings: Trend Data over Time and Cohorts. Research in Review 2014-1
- Author
- College Board, Mattern, Krista D., and Patterson, Brian F.
- Abstract
In March 2005, substantial revisions were made to the SAT, to better align test specifications with K-12 curriculum (Lawrence, Rigol, Van Essen & Jackson, 2003). Over the last five years, the College Board has made a concerted effort to collect higher education outcome data to document evidence of the validity of the SAT for use in college admission in light of these changes to the test specifications. Due to this large-scale data collection initiative, numerous reports have been released documenting the validity of the SAT for use in college admission. However, the information is siloed within individual reports, making it particularly difficult to synthesize the results and get a sense of the main take-away points. The purpose of the current report is to summarize the research findings from the various reports into a single document, illuminating patterns across cohorts and years. The document will serve as an overview of the research done to date, in a straightforward, easily digestible manner. The report relies heavily on graphical representations of the data to elucidate the main findings; however, data in tabular form are also provided in appendices for interested readers.
- Published
- 2014
28. Comparing Academic Readiness Requirements for Different Postsecondary Pathways: What Admissions Tests Tell Us
- Author
- Steedle, Jeffrey T., Radunzel, Justine, and Mattern, Krista D.
- Published
- 2019
29. How Useful Are Traditional Admission Measures in Predicting Graduation within Four Years? Research Report 2013-1
- Author
- College Board, Mattern, Krista D., Patterson, Brian F., and Wyatt, Jeffrey N.
- Abstract
Research has consistently shown that traditional admission measures--SAT® scores and high school grade point average (HSGPA)--are valid predictors of early college performance such as first-year grades; however, their usefulness for predicting later college outcomes has been questioned, especially for the SAT. This study builds on previous research by showing that both SAT scores and HSGPA are predictive of a more distal measure of college success--college graduation within four years. Moreover, each measure provided unique information to the prediction of graduation, indicating the utility of using both measures in the admission process to select applicants who are most likely to be successful. Finally, the relationships of SAT scores and HSGPA with four-year graduation rates by institutional control and selectivity (i.e., undergraduate admittance rate) were also investigated. The findings demonstrate the usefulness of traditional admission measures for predicting long-term college outcomes.
- Published
- 2013
30. Does College Readiness Translate to College Completion? Research Note 2013-9
- Author
- College Board, Mattern, Krista D., Shaw, Emily J., and Marini, Jessica
- Abstract
The current study examines the relationship between the SAT® College Readiness Benchmark with the outcome of graduation from college in either four or six years. The results indicate that the SAT benchmark is indeed differentiating between those students who graduate within four years and those who do not, as well as between those who graduate within six years and those who do not. The data were further disaggregated by student characteristics of gender, ethnicity, best spoken language, household income, and highest parental education. Even within student subgroups, differences in graduation rates for students who were college ready versus those who did not meet the SAT benchmark persisted. The results from the current study provide additional validity evidence for the use of the benchmark as a measure of college readiness and as a crucial tool in guiding educational interventions and policy that promote college success for all students.
- Published
- 2013
31. Validity of the SAT® for Predicting First-Year Grades: 2010 SAT Validity Sample. Statistical Report 2013-2
- Author
- College Board, Patterson, Brian F., and Mattern, Krista D.
- Abstract
The continued accumulation of validity evidence for the core uses of educational assessments is critical to ensure that proper inferences will be made for those core purposes. To that end, the College Board has continued to follow previous cohorts of college students, and this report provides updated validity evidence for using the SAT to predict first-year college grade point average (FYGPA) for the 2010 cohort. Colleges and universities (henceforth, "institutions") provided data on the cohort of first-time, first-year students enrolling in the fall of 2010. The College Board combined those college outcomes data with official SAT scores and SAT Questionnaire response data. In particular, 160 institutions provided data on 287,881 students, with 211,403 having complete data on high school grade point average (HSGPA), SAT critical reading (SAT-CR), mathematics (SAT-M), and writing (SAT-W) scores, and FYGPA. As has been shown in previous work (Kobrin, Patterson, Shaw, Mattern, & Barbuti, 2008; Patterson, Mattern, & Kobrin, 2009; Patterson & Mattern, 2011; Patterson & Mattern, 2012), the correlation of SAT section scores and HSGPA with FYGPA was strong (r = 0.63). When compared with the correlation of HSGPA alone with FYGPA (r = 0.54), the addition of the SAT section scores to HSGPA represented a substantial increase (Δr = 0.09) in the correlation with FYGPA. The patterns of differential validity by institutional and student characteristics and of differential prediction by student characteristics also follow those shown in previous work (Mattern, Patterson, Shaw, Kobrin, & Barbuti, 2008; Patterson, et al., 2009; Patterson & Mattern, 2011; Patterson & Mattern, 2012). The following are appended: (1) Institutions Providing First-Year Outcomes Data for the 2010 Cohort; (2) Raw Correlations of SAT and HSGPA with FYGPA by Institutional Characteristics; and (3) Raw Correlation of SAT Scores and HSGPA with FYGPA by Subgroups.
- Published
- 2013
32. Validity of the SAT® for Predicting First-Year Grades: 2011 SAT Validity Sample. Statistical Report 2013-3
- Author
- College Board, Patterson, Brian F., and Mattern, Krista D.
- Abstract
The continued accumulation of validity evidence for the intended uses of educational assessments is critical to ensure that proper inferences will be made for those purposes. To that end, the College Board has continued to collect college outcome data to evaluate the relationship between SAT® scores and college success. This report provides updated validity evidence for using the SAT to predict first-year college grade point average (FYGPA) for the 2011 cohort. Appended are: (1) Institutions Providing First-Year Outcomes for the 2011 Cohort; (2) Raw Correlations of SAT and HSGPA with FYGPA by Institutional Characteristics; and (3) Raw Correlation of SAT and HSGPA with FYGPA by Subgroups.
- Published
- 2013
33. The Relationship between SAT® Scores and Retention to the Second Year: Replication with the 2010 SAT Validity Sample. Statistical Report 2013-1
- Author
- College Board, Mattern, Krista D., and Patterson, Brian F.
- Abstract
The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the revised SAT for use in college admission. A study by Mattern and Patterson (2009) examined the relationship between SAT scores and retention to the second year. The sample included first-time, first-year students entering college in fall 2006, with 106 of the original 110 participating institutions providing data on retention to the second-year. Results showed that SAT performance was related to retention, even after controlling for relevant student and institutional characteristics. Replication studies have been conducted for subsequent entering cohorts of students and similar results were found (Mattern & Patterson, 2011, 2012a, 2012b). Replicating the analyses of the previous four reports (Mattern & Patterson, 2009; 2011, 2012a, 2012b), the current study examined the relationship between SAT performance and retention to the second year for first-time, first-year students that began in the fall of 2010. A total of 160 institutions provided data which translated to 287,881 students. Students without SAT scores, self-reported high school grade point average (HSGPA), or retention data were removed from analyses, resulting in a final sample size of 215,704 students. The results from the current study based on the 2010 sample show the same pattern of results as the previous reports. Namely, higher SAT scores are associated with higher retention rates. This was true, even after controlling for student characteristics (gender, race/ethnicity, household income, parental education, and HSGPA) and institutional characteristics (control, size, and undergraduate admittance rate). [For "The Relationship between SAT Scores and Retention to the Second Year: Replication with 2009 SAT Validity Sample," see ED563088.]
- Published
- 2013
34. Are AP® Students More Likely to Graduate from College on Time? Research Report 2013-5
- Author
- College Board, Mattern, Krista D., Marini, Jessica P., and Shaw, Emily J.
- Abstract
The current study examined the role of AP® Exam participation and performance in four-year college graduation within four years. Because students who take AP Exams can earn college credit while still in high school, it was expected that AP students would have higher four-year graduation rates. Moreover, it was expected that AP students who earned higher exam scores would also have a higher likelihood of graduating within four years compared to AP students who did not perform well on the exam, because academic performance across a variety of measures has been positively linked to graduation. Two national samples were used to test these research questions, and the results confirmed a positive relationship of both AP Exam participation and performance with graduation within four years. This relationship was evident even after controlling for relevant institutional- and/or student-level factors. The academic and financial benefits of the AP Program are discussed. The following is appended: Table A1--HGLM Equations for the AP Exam Participation and Performance Models Examined.
- Published
- 2013
35. Identifying Students at Risk for Leaving an Institution: SAT and HSGPA as Tools to Improve Retention
- Author
- College Board, Shaw, Emily J., and Mattern, Krista D.
- Abstract
The current study explored the validity and potential of using the SAT, in conjunction with HSGPA, to arrive at a predicted FYGPA in order to improve student retention at four-year postsecondary institutions. Specifically, this study examined whether college students who did not perform as expected (observed FYGPA minus predicted FYGPA) were more likely to leave their institution. Results showed that both under- and over-performing students were more likely to leave college as compared to their academically similar peers who performed as expected. Recommendations are provided for institutions to incorporate this information as part of a cost-effective and efficient detection tool to identify students who may be at risk of not completing their degrees and to help improve institutional retention rates.
- Published
- 2012
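Result 35 flags students whose observed FYGPA departs from the FYGPA predicted from SAT scores and HSGPA. The sketch below illustrates that residual-based flagging on synthetic data; the one-standard-deviation cutoff, variable names, and data are assumptions, not the study's actual rule.

```python
# Sketch only: predict FYGPA from SAT and HSGPA, then flag students whose
# observed FYGPA departs sharply from the prediction in either direction.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({"sat": rng.normal(1100, 150, n),
                   "hsgpa": rng.normal(3.3, 0.4, n)})
df["fygpa"] = (0.001 * df["sat"] + 0.5 * df["hsgpa"]
               + rng.normal(0, 0.5, n)).clip(0, 4)

model = LinearRegression().fit(df[["sat", "hsgpa"]], df["fygpa"])
df["residual"] = df["fygpa"] - model.predict(df[["sat", "hsgpa"]])

cutoff = df["residual"].std()                 # illustrative 1-SD cutoff
df["flag"] = np.select(
    [df["residual"] < -cutoff, df["residual"] > cutoff],
    ["under-performing", "over-performing"],
    default="as expected",
)
print(df["flag"].value_counts())
```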
36. The Relationship between SAT® Scores and Retention to the Second Year: 2008 SAT Validity Sample. Statistical Report 2012-1
- Author
- College Board, Mattern, Krista D., and Patterson, Brian F.
- Abstract
The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the revised SAT®, which consists of three sections: critical reading (SAT-CR), mathematics (SAT-M), and writing (SAT-W), for use in college admission. A study by Mattern and Patterson (2009) examined the relationship between SAT scores and retention to the second year of college. The sample included first-time first-year students entering college in fall 2006, with 106 of the original 110 participating institutions providing data on retention to the second year. Results showed that SAT performance was related to retention, even after controlling for HSGPA. The following year, previously participating as well as new colleges and universities were invited to provide first-year performance data on the first-time first-year students who entered college in the fall of 2007. For the 2007 sample, a total of 72 of the original 110 institutions and 38 new institutions provided data. The 110 institutions in the 2007 sample contained 216,081 students. A replication of the Mattern and Patterson (2009) study was conducted with the new cohort, and similar results were found (Mattern & Patterson, 2011). Similarly, previously participating as well as new colleges and universities were invited to provide first-year performance data on the first-time first-year students who began in the fall of 2008. For the 2008 cohort of students, a total of 129 institutions provided data on a total of 246,652 students. Students without SAT scores, self-reported high school grade point average (HSGPA), or retention data were removed from analyses, resulting in a final sample size of 173,963 students. Replicating the analyses of the previous two reports (Mattern & Patterson, 2009, 2011), the tables below are based on the 2008 sample, and the findings are largely the same as those found in the earlier reports. Results show that SAT performance is positively related to second-year retention rates, even after controlling for student and institutional characteristics. This was also true within HSGPA bands, showing that SAT scores provide incremental value over high school grades in predicting retention. Furthermore, controlling for SAT performance is seen to reduce and in some cases eliminate the differences in retention rates between student and institutional subgroups that are otherwise observed. [For "The Relationship between SAT Scores and Retention to the Second Year: 2007 SAT Validity Sample," see ED563083.]
- Published
- 2012
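Result 36 reports retention rates by SAT score within HSGPA bands to show incremental value over high school grades. The sketch below builds that kind of banded cross-tabulation from synthetic data; the band cut points and the retention-generating model are assumptions, not the report's actual specification.

```python
# Sketch only: second-year retention rate by SAT band within HSGPA band,
# computed from a synthetic student table.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 20000
df = pd.DataFrame({"sat": rng.normal(1100, 150, n),
                   "hsgpa": rng.uniform(2.0, 4.0, n)})
p_retain = 1 / (1 + np.exp(-(0.004 * (df["sat"] - 1100)
                             + 1.5 * (df["hsgpa"] - 3.0) + 1.0)))
df["retained"] = (rng.random(n) < p_retain).astype(int)

df["sat_band"] = pd.cut(df["sat"], bins=[0, 900, 1100, 1300, 2000])
df["hsgpa_band"] = pd.cut(df["hsgpa"], bins=[0, 2.5, 3.0, 3.5, 4.0])

# Retention rate for each HSGPA band, broken out by SAT band.
table = df.pivot_table(values="retained", index="hsgpa_band",
                       columns="sat_band", aggfunc="mean", observed=True)
print(table.round(3))
```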
37. Are the Best Scores the Best Scores for Predicting College Success?
- Author
- Patterson, Brian F., Mattern, Krista D., and Swerdzewski, Peter
- Abstract
The College Board's SAT® Score Choice™ policy allows students to choose which set(s) of scores to send to the colleges and universities to which they plan to apply. Based on data gathered before the implementation of that policy, the following study evaluated the predictive validity of the various sets of SAT scores. The value of five score sets for predicting first-year college grade-point average (FYGPA) was examined using a sample of 150,377 students from 110 institutions. These sets of scores were from students': (1) first administration, (2) latest administration, (3) section averages, (4) highest single administration, and (5) highest individual sections. The various score sets were found to have nearly equal predictive validity for the total group, by various institutional characteristics, and regardless of how often students tested. (Contains 6 tables and 2 endnotes.)
- Published
- 2012
38. A Standard-Setting Study to Establish College Success Criteria to Inform the SAT® College and Career Readiness Benchmark. Research Report 2012-3
- Author
- College Board, Kobrin, Jennifer L., Patterson, Brian F., Wiley, Andrew, and Mattern, Krista D.
- Abstract
In 2011, the College Board released its SAT college and career readiness benchmark, which represents the level of academic preparedness associated with a high likelihood of college success and completion. The goal of this study, which was conducted in 2008, was to establish college success criteria to inform the development of the benchmark. The College Board convened a panel composed of experts in educational policy and higher education to review data showing the relationship between SAT scores and college performance. Panelists were asked to provide two sets of ratings on what first-year college GPA (FYGPA) should be used to define the criterion for success in the first year of college, and two sets of ratings to define the probability level for a successful student attaining that FYGPA (probability of mastery). The mean FYGPA rating from the second round was 2.62 (with a median of 2.67), and the mean and median rating for probability of mastery was 70%. The SAT score associated with the panelists' final ratings was approximately 1580. Three appendices are included: (1) Sample Rating Form; (2) Percentage of Students in the SAT Validity Study Earning Different FYGPA by SAT Score Category, and by Gender and Ethnic Subgroups (table); and (3) Standard-Setting Evaluation Survey.
- Published
- 2012
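One plausible way to connect the panelists' ratings to a benchmark score is sketched below: given a logistic model of the probability of reaching the FYGPA criterion as a function of SAT total score, invert it at the 70% probability level. The coefficients are hypothetical, chosen only so the example lands near the reported value of about 1580; they are not the study's estimates, and the study's actual procedure may have differed.

```python
# Hedged sketch: locating the SAT total score at which a fitted logistic model
# gives a 70% chance of reaching the FYGPA success criterion. The intercept and
# slope below are hypothetical, not estimates from the study.
import math

def sat_for_probability(intercept: float, slope: float, p: float) -> float:
    """Solve logit(p) = intercept + slope * sat_total for sat_total."""
    return (math.log(p / (1.0 - p)) - intercept) / slope

# Hypothetical fitted model: logit P(FYGPA >= 2.67) = -5.47 + 0.004 * SAT_total
benchmark = sat_for_probability(intercept=-5.47, slope=0.004, p=0.70)
print(f"SAT total for a 70% chance of FYGPA >= 2.67: about {benchmark:.0f}")
```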
39. The Validity of the SAT® for Predicting Cumulative Grade Point Average by College Major. Research Report 2012-6
- Author
-
College Board, Shaw, Emily J., Kobrin, Jennifer L., Patterson, Brian F., and Mattern, Krista D.
- Abstract
The current study examined the differential validity of the SAT for predicting cumulative GPA (cGPA) through the second year of college by college major, as well as the differential prediction of cGPA by college major across student subgroups. The relationship between the SAT and cGPA varied somewhat by major, as well as by major and subgroup (e.g., gender, ethnicity, and parental education level). This variability was likely due to differences in the nature of the college course work, grading practices, student self-selection, and academic cultures (e.g., male-dominated or highly competitive) across majors. The findings from this study may be particularly relevant to colleges and universities examining different admission criteria for acceptance to specialized colleges and major programs within an institution, and the report could serve as a comprehensive resource for higher education researchers examining college major and performance. Supplemental tables are appended.
- Published
- 2012
40. The Validity of the Academic Rigor Index (ARI) for Predicting FYGPA. Research Report 2012-5
- Author
-
College Board, Mattern, Krista D., and Wyatt, Jeffrey N.
- Abstract
A recurrent trend in higher education research has been to identify additional predictors of college success beyond the traditional measures of high school grade point average (HSGPA) and standardized test scores, given that a large percentage of the variance in college performance remains unaccounted for. A recent study by Wyatt, Wiley, Camara, and Proestler (2012) expanded the definition of college readiness beyond test scores and HSGPA to include a measure of the academic rigor or challenge associated with a student's course work in high school, referred to as the academic rigor index (ARI). This study represents the first examination of the validity of the ARI in predicting first-year grade point average (FYGPA). The correlation between the ARI and FYGPA indicated a moderate effect overall and by gender, ethnicity, and household income subgroups; however, the ARI did not add incremental validity above SAT scores and HSGPA. Additionally, when added to SAT scores and HSGPA, the ARI had no impact on differential prediction for relevant subgroups. Given the current movement toward a more holistic assessment of college applicants, a standardized measure of the academic rigor of a student's high school course load represents a promising addition to the assessment of a student's level of college readiness. Table A-1 is appended.
- Published
- 2012
41. The Validity of SAT® Scores in Predicting First-Year Mathematics and English Grades. Research Report 2012-1
- Author
-
College Board, Mattern, Krista D., Patterson, Brian F., and Kobrin, Jennifer L.
- Abstract
This study examined the validity of the SAT for predicting performance in first-year English and mathematics courses. Results reveal a significant positive relationship between SAT scores and course grades, with slightly higher correlations for mathematics courses compared to English courses. Correlations were estimated by student characteristics (gender, ethnicity, and best language), institutional characteristics (size, selectivity, and control, i.e., private or public), and course content (e.g., calculus, algebra). The findings suggest that performance on the SAT is predictive of performance in specific college courses. Furthermore, stronger relationships were found between test scores and grades when the content of the two were aligned (such as the SAT mathematics section and mathematics course grades, or the SAT writing section and English course grades). Supplemental tables are appended.
- Published
- 2012
42. The Relationship between SAT Scores and Retention to the Second Year: Replication with 2009 SAT Validity Sample. Statistical Report 2011-3
- Author
-
College Board, Mattern, Krista D., and Patterson, Brian F.
- Abstract
The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the revised SAT for use in college admission. A study by Mattern and Patterson (2009) examined the relationship between SAT scores and retention to the second year of college. The sample included first-time, first-year students entering college in fall 2006, with 106 of the original 110 participating institutions providing data on retention to the second year. The results showed that SAT performance was related to retention, even after controlling for relevant student and institutional characteristics. Replication studies have been conducted for subsequent entering cohorts of students, and similar results were found (Mattern & Patterson, 2011a, 2011b). Replicating the analyses of the previous three reports (Mattern & Patterson, 2009, 2011a, 2011b), the current study examined the relationship between SAT performance and retention to the second year for first-time, first-year students who began in the fall of 2009. A total of 131 institutions provided data, which translated to 262,949 students. Students without SAT scores, self-reported high school grade point average (HSGPA), or retention data were removed from analyses, resulting in a final sample size of 199,366 students. The results from the current study based on the 2009 sample show the same pattern as the previous reports: higher SAT scores are associated with higher retention rates. This was true even after controlling for student characteristics (gender, race/ethnicity, household income, parental education, and HSGPA) and institutional characteristics (control, size, and selectivity). [For "The Relationship between SAT® Scores and Retention to the Second Year: 2008 SAT Validity Sample," see ED563084.]
- Published
- 2012
43. It's Major! College Major Selection & Success
- Author
-
College Board, Byers, Jenny, Mattern, Krista D., Shaw, Emily J., and Springall, Robert
- Abstract
Presented at the College Board National Forum, October 26, 2011. Choosing a college major is challenging enough without stopping to consider the impact it has on a student's college experience and career choice. To provide support during this major decision, participants in this session will develop strategies to help students make an appropriate major choice. Based on research and recommendations from high school and college educators, participants will discuss interventions and best practices. Participants will focus on factors related to major choice and retention (with an emphasis on STEM majors), and will investigate some noteworthy findings related to students with undeclared majors.
- Published
- 2011
44. The Validity of the SAT for Predicting Cumulative Grade Point Average by College Major
- Author
-
College Board, Shaw, Emily J., Kobrin, Jennifer L., Patterson, Brian F., and Mattern, Krista D.
- Abstract
Presented at the Annual Meeting of the American Educational Research Association (AERA) in New Orleans, LA, in April 2011. The current study examined the differential validity of the SAT for predicting cumulative GPA through the second year of college by college major, as well as the differential prediction of cumulative GPA by college major among student subgroups. The relationship between the SAT and cumulative GPA varied somewhat by major, as well as by major and subgroup, likely due to differences in the nature of the college coursework, grading practices, student self-selection, and differing academic cultures (e.g., male-dominated or highly competitive) across majors.
- Published
- 2011
45. The Relationship between SAT Scores and Retention to the Second Year: 2007 SAT Validity Sample. Statistical Report No. 2011-4
- Author
-
College Board, Office of Research and Development, Mattern, Krista D., and Patterson, Brian F.
- Abstract
This report presents the findings from a replication of the analyses from the report, "Is Performance on the SAT Related to College Retention?" (Mattern & Patterson, 2009). The tables presented herein are based on the 2007 sample and the findings are largely the same as those presented in the original report, and show SAT scores are related to second-year retention. Even after controlling for student and institutional characteristics, returners had higher SAT total scores than non-returners, by an average of 116 points. This held true even within each subgroup analyzed, meaning the SAT performance gap is not due to differences in the demographic characteristics of the two groups. Also, this report finds that differences in retention rates by student subgroups are minimized and in some instances eliminated when controlling for SAT performance. This is particularly noticeable with respect to differences in retention rates by ethnicity.
- Published
- 2011
46. Validity of the SAT for Predicting First-Year Grades: 2008 SAT Validity Sample. Statistical Report No. 2011-5
- Author
-
College Board, Office of Research and Development, Patterson, Brian F., and Mattern, Krista D.
- Abstract
The findings for the 2008 sample are largely consistent with the previous reports. SAT scores were found to be correlated with FYGPA (r = 0.54), with a magnitude similar to that of HSGPA (r = 0.56). The best set of predictors of FYGPA remains SAT scores and HSGPA combined (r = 0.63), as adding the SAT sections to HSGPA leads to a substantial improvement in the prediction of FYGPA (Δr = 0.07). This finding was consistent across all subgroups of the sample, by both institutional characteristics and demographics (Δr = 0.06). All correlations presented here have been corrected for restriction of range, but the same basic patterns hold for the raw correlations. The following are appended: (1) Institutions Providing First-Year Outcomes Data for the 2008 Cohort; (2) Raw Correlations of SAT and HSGPA with FYGPA by Institutional Characteristics; and (3) Raw Correlation of SAT Scores and HSGPA with FYGPA by Subgroups. [An illustrative sketch of the incremental-validity comparison follows this entry.]
- Published
- 2011
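The incremental-validity comparison above amounts to comparing the multiple correlation of FYGPA with HSGPA alone against the multiple correlation with HSGPA plus the three SAT sections. The sketch below is a minimal illustration on simulated data; the variable names are assumptions, and no restriction-of-range correction is applied.

```python
# Minimal sketch on simulated data: multiple correlation of FYGPA with HSGPA
# alone versus HSGPA plus the three SAT sections, and the increment between
# them. Variable names are assumptions; no range-restriction correction.
import numpy as np

def multiple_r(y: np.ndarray, X: np.ndarray) -> float:
    """Multiple correlation of y with the columns of X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return float(np.corrcoef(y, X1 @ beta)[0, 1])

rng = np.random.default_rng(0)
n = 5_000
ability = rng.normal(0.0, 1.0, n)                    # shared latent factor
hsgpa = np.clip(3.2 + 0.4 * ability + rng.normal(0, 0.3, n), 0.0, 4.0)
sat_cr = 500 + 80 * ability + rng.normal(0, 60, n)
sat_m = 500 + 80 * ability + rng.normal(0, 60, n)
sat_w = 500 + 80 * ability + rng.normal(0, 60, n)
fygpa = np.clip(0.8 + 0.45 * hsgpa + 0.4 * ability + rng.normal(0, 0.5, n),
                0.0, 4.0)

r_hsgpa = multiple_r(fygpa, hsgpa.reshape(-1, 1))
r_both = multiple_r(fygpa, np.column_stack([hsgpa, sat_cr, sat_m, sat_w]))
print(f"R(HSGPA) = {r_hsgpa:.3f}, R(HSGPA + SAT) = {r_both:.3f}, "
      f"increment = {r_both - r_hsgpa:.3f}")
```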
47. Validity of the SAT for Predicting Second-Year Grades: 2006 SAT Validity Sample. Statistical Report No. 2011-1
- Author
-
College Board, Office of Research and Development, Mattern, Krista D., and Patterson, Brian F.
- Abstract
This report presents the validity of the SAT for predicting two second-year outcomes: (1) second-year cumulative GPA (2nd Yr Cum GPA), and (2) second-year grade point average (2nd Yr GPA). Similar to the results for first-year grade point average (1st Yr GPA), the SAT is strongly correlated with second-year outcomes. For many significant subgroups, such as ethnic minority students and female students, the SAT was in fact a better predictor of 2nd Yr Cum GPA and 2nd Yr GPA than were high school grades alone. However, for all students, SAT scores in combination with high school grades were the best predictor of these second-year outcomes, since each measure provides incremental validity over the other. For example, even within HSGPA levels, there is still a strong positive relationship between SAT scores and both 2nd Yr Cum GPA and 2nd Yr GPA. An appendix lists the institutions providing second-year data on the 2006 freshman cohort.
- Published
- 2011
48. Examining the Linearity of the PSAT/NMSQT®-FYGPA Relationship. Research Report 2011-7
- Author
-
College Board, Marini, Jessica P., Mattern, Krista D., and Shaw, Emily J.
- Abstract
There is a common misperception that test scores do not predict above a minimum threshold (Sackett, Borneman, & Connelly, 2008). That is, test scores may be useful for identifying students with very low levels of ability; however, higher scores are considered unrelated to higher performance for those above a certain threshold. This study examines whether this is true for the Preliminary SAT/National Merit Scholarship Qualifying Test (PSAT/NMSQT), which is used for exactly that purpose: to differentiate among very high-performing students. The linearity of the relationship between PSAT/NMSQT scores and first-year college GPA (FYGPA) was explored using a regression approach. The relationship was examined over the entire range of the PSAT/NMSQT score scale, known as the Selection Index, which ranges from 60 to 240, as well as at the upper end of the scale (≥ 200), where initial screening decisions are made for scholarship programs conducted by the National Merit Scholarship Corporation (NMSC). For the full PSAT/NMSQT scale, the addition of a quadratic term improved model fit; however, the effect size was small, as indexed by a change in the squared multiple correlation coefficient (R²) of 0.001. That is, including the squared Selection Index in the model accounted for an additional 0.1% of the variance in FYGPA. For the subset of students with a Selection Index of 200 or higher, the results indicated a strong linear relationship, which suggests that even among very high-scoring students, the PSAT/NMSQT score scale differentiates between students in terms of academic success as measured by grades earned in the first year of college. In sum, the results of this study support the use of the PSAT/NMSQT as a screening tool for selecting Merit Scholarship winners. [An illustrative sketch of the linear-versus-quadratic comparison follows this entry.]
- Published
- 2011
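The linearity check described above can be reproduced in outline by comparing R² for a linear model of FYGPA on the Selection Index against a model that adds a squared term. The sketch below uses simulated data generated under a roughly linear relationship, so the quadratic increment it prints should be small; the variable names and data-generating values are assumptions, not the study's.

```python
# Illustrative sketch on simulated data: R-squared for FYGPA regressed on the
# PSAT/NMSQT Selection Index, with and without a quadratic term, plus the same
# fit restricted to high scorers (Selection Index >= 200).
import numpy as np

def r_squared(y: np.ndarray, X: np.ndarray) -> float:
    """R-squared from an ordinary least squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 20_000
index = rng.uniform(60, 240, n)                      # Selection Index scale
fygpa = np.clip(0.6 + 0.011 * index + rng.normal(0, 0.55, n), 0.0, 4.0)

r2_lin = r_squared(fygpa, index.reshape(-1, 1))
r2_quad = r_squared(fygpa, np.column_stack([index, index ** 2]))
print(f"R^2 linear = {r2_lin:.4f}, with quadratic = {r2_quad:.4f}, "
      f"increment = {r2_quad - r2_lin:.4f}")

# The same comparison for the upper end of the scale (Selection Index >= 200).
high = index >= 200
r2_high = r_squared(fygpa[high], index[high].reshape(-1, 1))
print(f"R^2 among scorers at or above 200 (linear) = {r2_high:.4f}")
```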
49. Examination of College Performance by National Merit Scholarship Program Recognition Level. Research Report 2011-10
- Author
-
College Board, Marini, Jessica P., Mattern, Krista D., and Shaw, Emily J.
- Abstract
The current study examined the validity of the selection process used for the National Merit Scholarship Program (NMSP) to identify scholarship winners. Namely, this study examined whether students who advanced to higher NMSP recognition levels (Commended Students, Semifinalists, and various levels of award winners) had higher college performance, as indexed by first-year college grades and second-year retention rates. Based on a sample of nearly 400,000 college students, the results indicated that students who advance to higher NMSP recognition levels did earn higher FYGPAs and were more likely to return for their second year of college. In sum, these findings provide validity evidence in support of the NMSP selection process for identifying students who are most likely to succeed in college and deserving of a National Merit Scholarship.
- Published
- 2011
50. The Relationship between SAT® Scores and Retention to the Fourth Year: 2006 SAT Validity Sample. Statistical Report 2011-6
- Author
-
College Board, Mattern, Krista D., and Patterson, Brian F.
- Abstract
The College Board formed a research consortium with four-year colleges and universities to build a national higher education database with the primary goal of validating the SAT® for use in college admission. The first sample included first-time, first-year students entering college in fall 2006, with 110 institutions providing students' first-year course work, grades, and retention to the second year. In addition to examining the predictive validity of the SAT in terms of college grades (Kobrin, Patterson, Shaw, Mattern, & Barbuti, 2008; Mattern, Patterson, Shaw, Kobrin, & Barbuti, 2008), the relationship between SAT performance and retention to the second year was examined (Mattern & Patterson, 2009). The results showed that higher SAT scores were associated with higher retention rates. In the following years, participating colleges and universities were invited to provide subsequent performance data for these students in order to track them longitudinally throughout their college careers. For the second year, 66 of the original 110 institutions provided data. Mattern and Patterson (2011) examined the relationship between SAT performance and retention to the third year of college. Similar to the results for second-year retention, higher SAT scores were associated with higher third-year retention rates. This study builds on that body of research by examining the relationship between SAT performance and retention to the fourth year of college. The sample consisted of 59 of the original 110 institutions. Complete data (i.e., SAT scores, self-reported high school grade point average [HSGPA], and retention to the second, third, and fourth years) were available for 78,640 students. Results show that SAT performance was positively related to fourth-year retention rates. Detailed results are provided in this report. A list of institutions providing retention to the fourth year data for the 2006 SAT validity sample is appended.
- Published
- 2011