139,595 results on "Testing"
Search Results
2. A Study of a Measurement Resource in Child Research, Project Head Start.
- Author
- Southern Illinois Univ., Edwardsville; Bommarito, James; and Johnson, Orval G.
- Abstract
Measures of child behavior and characteristics, not yet published as separate entities, were collected through a page-by-page search of issues of 46 journals (listed in Appendix A) published from January 1956 to December 1965 and of 50 relevant books. Correspondence with researchers and authors of measures yielded additional measurement resources. As presented in the report, the measures were grouped into six kinds: (1) development, academic aptitude, and achievement; (2) personality; (3) attitudes; (4) social interaction and skills; (5) perceptual skills; and (6) miscellaneous. The listing for each test included its name, the author, the age of the population for whom it was designed, the general area of interest, the type of measure, and the source from which a copy of the measure might be obtained. A description of the measure (often quoting its author) included sample items and an outline of the administrative and scoring procedures. When available, reliability and validity data were briefly summarized. A bibliographical reference was provided for each measure. (MS)
- Published
- 2024
3. Student Test Anxiety as Related to the Personality Characteristics of Their Teachers.
- Author
- Sass, Edmund J. and Meyer, Marie
- Abstract
The research question investigated was: What relationship, if any, exists between teacher personality characteristics associated with self-actualization and the test anxiety levels of their students? Teacher personality characteristics in 6 schools were assessed with the Shostrom Personal Orientation Inventory, and the Sarason Test Anxiety Scale for Children was administered to their pupils. Students in grades 4 through 8 were the subjects of the research. At the junior high school level, a significant relationship was found between teachers' high spontaneity in combination with high self-regard and relatively low test anxiety in their students. Results approached significance for elementary school students. The implications of these findings for the education and selection of teachers at the junior high level are considered. (JD)
- Published
- 2024
4. Innovations in Assessing Students' Digital Literacy Skills in Learning Science: Effective Multiple Choice Closed-Ended Tests Using Rasch Model
- Author
- Fitria Lafifa and Dadan Rosana
- Abstract
This research aimed to develop a multiple-choice closed-ended test to assess and evaluate students' digital literacy skills. The sample comprised students at MTsN 1 Blitar City, selected using a purposive sampling technique. The test was also validated by experts, namely two Doctors of Physics and Science from Yogyakarta State University. The test instrument was developed based on five aspects of digital literacy skills: information, communication, content creation, security, and problem-solving. Data were analyzed descriptively and inferentially using the Rasch model with the aid of Quest software. The results showed that eight multiple-choice closed-ended test items were declared valid based on expert validation, with an Aiken V value of 1.00. The reliability was 0.97, in the very high category, and the INFIT MNSQ values fell within 0.86-1.16, so seven items fit the Rasch model. Thus, the seven items in the multiple-choice closed-ended test instrument can be used to assess and evaluate students' digital literacy skills in learning science.
- Published
- 2024
5. Deconstructing the Testing Mode Effect: Analyzing the Difference between Writing and No Writing on the Test
- Author
- Daniel M. Settlage and Jim R. Wollscheid
- Abstract
The examination of the testing mode effect has received increased attention as higher education has shifted to remote testing during the COVID-19 pandemic. We believe the testing mode effect consists of four components: the ability to physically write on the test, the method of answer recording, the proctoring/testing environment, and the effect testing mode has on instructor question selection. This paper examines the first component, the ability to write on the test, which we believe is a neglected area of study. Using a normalization technique to control for student aptitude and instructor bias, we find that removing the ability of students to physically write on the test significantly lowers student performance. This finding holds across multiple question types classified by difficulty level, Bloom's taxonomy, and on figure/graph-based questions, and has implications for testing in both face-to-face and online environments.
- Published
- 2024
6. Effectiveness of Online Testing versus Traditional Testing: A Comparative Study of Saudi Female College Students
- Author
- Asma M. Abumalik and Fatmah A. Alqahtani
- Abstract
Testing is an effective method to determine learning outcomes for knowledge and skills learning domains. The aim of this study was to examine the differences in test achievements among 50 Saudi female English major students at the College of Languages at Princess Nourah University in Riyadh. The tests were administered using two different methods: paper-based and Blackboard-based (online). Additionally, the study explored the impact of these two test methods on students' achievement in terms of course learning outcomes. The results of the study indicated that there was no significant difference between the two test methods in terms of overall test scores. However, it was found that the Blackboard-based test resulted in slightly higher scores for knowledge domain outcomes, while the paper-based test showed higher scores for skills domain outcomes. The results obtained in this study suggest that both paper-based and Blackboard-based test methods can be equally effective at assessing the general achievement of students. However, the choice of test method may have a slight impact on the specific learning outcomes being assessed, with Blackboard-based tests favouring knowledge domain outcomes and paper-based tests favouring skills domain outcomes. Furthermore, when using Blackboard-based tests, time pressure should be taken into consideration, as it is observed to significantly influence students' performance in both learning domains.
- Published
- 2024
7. Training Needs of In-Service EFL Teachers in Language Testing and Assessment
- Author
- Gamze Sariyildiz Canli and İsmail Firat Altay
- Abstract
The present research reports the findings of a study that sought to examine the in-service training needs of in-service EFL teachers in the field of language testing and assessment (LTA). With this aim, four domains of the LTA field were identified: classroom-focused LTA, knowledge of testing and assessment, purposes of testing, and content and concepts of LTA. A quantitative research design was employed, and data were collected through a questionnaire. A total of 300 in-service EFL teachers working in different formal and non-formal education institutions participated in this study. The quantitative data were analyzed using descriptive statistics. Results indicated that these in-service EFL teachers need an intermediate level of further training in the LTA field. The need for training was expressed most for "content and concepts of LTA," while "classroom-focused LTA" was the domain where the need for in-service training was lowest. The findings of this study will serve as a needs analysis for in-service programs and will be beneficial for curriculum development, teacher education programs, and more specifically assessment courses offered in pre-service and in-service (INSET) programs.
- Published
- 2024
8. NAEP 2024 Facts for Teachers
- Author
- National Center for Education Statistics (NCES) (ED/IES) and Hager Sharp
- Abstract
The National Assessment of Educational Progress (NAEP) is an integral measure of academic progress over time. It is the largest nationally representative and continuing assessment of what our nation's students know and can do in various subjects such as civics, mathematics, reading, science, technology and engineering literacy, U.S. history, and writing. The program also provides valuable insights into students' educational experiences and opportunities to learn in and outside of the classroom. Elected officials, policymakers, and educators all use NAEP results to develop ways to improve education. The 2024 program will include: (1) assessments in mathematics and reading at grades 4, 8, and 12 and science at grade 8; (2) pilot testing in mathematics and reading at grades 4 and 8, to help improve future NAEP assessments and ensure that they continue to be a reliable measure of student achievement; (3) survey questionnaires for participating students, teachers, and principals, to provide a better understanding of factors that may be related to students' learning. This brief document highlights the NAEP 2024 assessment, teachers as essential partners in NAEP, and recent NAEP results. [For "NAEP 2023 Facts for Teachers. Grades 4, 8, and 12 Field Test," see ED627646.]
- Published
- 2024
9. A Literature Review of Online Exams in HE in Physics and Maths
- Author
- Martin Braun
- Abstract
During the COVID pandemic, universities around the globe had to move not only their content delivery online, but also their assessments. Because COVID caused significant upheaval in Higher Education (HE), this enforced experiment also afforded an opportunity to reflect on traditional, invigilated, closed-book exams (ICBE), resulting in research and advice in this area. A systematic review of this academic and grey literature was performed, concentrating on maths-heavy physics examinations, to investigate what guidance is given to examination writers, educators who prepare students for exams, and HE examinees themselves. The literature review results were divided into advice for examiners who need to provide an uninvigilated open-book exam (UOBE), discussions on cheating, advice for students, and case studies. It was found that ICBEs were good at examining lower-order cognitive skills, e.g. recall and understanding, but higher-order skills, such as analysing and synthesising, are better examined with access to a larger range of resources. Guidance on making academic misconduct more difficult also suggested using higher-order thinking skills in exam questions, as responses to these types of tasks are more individual and getting outside help may be more difficult in a time-constrained UOBE. Furthermore, the literature encouraged reflection on the motivation for cheating and suggested that overly demanding assessment may encourage students to seek inappropriate help. The advice for students highlighted the need to prepare as thoroughly for a UOBE as they would for a traditional exam. The thrust should probably change from pure memorization to students preparing their notes so that they can efficiently access their material to locate relevant parts for synthesis during a UOBE.
Some of the case studies used statistical methods to investigate the comparability of grades between UOBEs and ICBEs, and some found them comparable, so a large shift in results may be due to factors other than the exam type. Other studies describe their approach and include stakeholder reflections. The main recommendation to exclude lower cognitive skills can pose a problem for maths-heavy exams, as they mainly assess how well an examinee has mastered these skills before building on them. However, it seems advisable to climb higher up Bloom's taxonomy where possible. Also, it may be conceivable to break exams up into shorter sections that require individual uploading before access to the next part is granted, to reduce the possibility of outside help. Furthermore, individualised maths-type problems could be achievable by using different data sets for a question. Student advice should highlight the differences between UOBEs and ICBEs so that students can prepare appropriately.
- Published
- 2024
10. A Theoretical Suggestion on Testing Measurement Invariance in Adapting Parametric Measurement Tools
- Author
- Gökhan Iskifoglu
- Abstract
This research paper investigated the importance of conducting measurement invariance analysis when developing measurement tools for assessing differences between and among study variables. Most studies that developed an inventory to assess an attitude, behavior, belief, IQ, or intuition in a person's characterological profile ignored testing measurement invariance for equivalency between comparable variables. As a result, such measures lack true validity and reliability, suffer from methodological bias, and offer little or no chance of identifying true differences between the variables being studied. This article therefore explains the necessity and use of measurement invariance analysis when a researcher wants to develop a new measurement tool or adapt a tool from one source language to another target language. The levels of measurement invariance discussed in this study are configural invariance, scalar invariance, metric invariance, and structural invariance. The approaches used to conduct these invariance models, and the ways they are interpreted, are discussed in detail with a robust collection of supportive literature.
- Published
- 2024
11. Assuring Academic Integrity of Online Testing in Fundamentals of Accounting Courses
- Author
- Elizabeth Whitlow and Stephanie Metts
- Abstract
Demand for online courses continues to grow. To remain competitive, higher education institutions must accede to this demand while ensuring that academic rigor and integrity are maintained. The authors teach introductory Fundamentals of Financial and Managerial Accounting courses online. Previously, there was no proctoring of the exams. Prior experience teaching these courses led the professors to suspect a high likelihood that academic integrity on these tests was low and that cheating was high. To address academic integrity concerns, the professors utilized a remote proctoring service employing a lockdown browser with screen and webcam monitoring. The program monitors the students remotely, recording sound, video and the information appearing on the students' screens. The videos are reviewed for detectable instances of breach of academic integrity prior to releasing the grades. Data was collected and analyzed for the average exam scores prior to and after the implementation of the remote proctoring software. The data analysis reveals a significant difference in the two sets of scores, with the average exam scores after the implementation of the remote proctoring being significantly lower than the ones before implementation. These results indicate that concern about academic integrity in online test-taking in the accounting curriculum is valid.
- Published
- 2024
12. New York State Testing Program: Grades 6 and 7 English Language Arts Paper-Based Tests. Teacher's Directions. Spring 2024
- Author
- New York State Education Department and NWEA
- Abstract
The New York State Education Department (NYSED) has a partnership with NWEA for the development of the 2024 Grades 3-8 English Language Arts Tests. Teachers from across the State work with NYSED in a variety of activities to ensure the validity and reliability of the New York State Testing Program (NYSTP). The 2024 Grades 6 and 7 English Language Arts Tests are administered in two sessions on two consecutive school days. Students are asked to demonstrate their knowledge and skills in the areas of reading and writing. Students will have as much time as they need each day to answer the questions in the test sessions within the confines of the regular school day. For Grades 6 and 7, the tests consist of multiple-choice (1-credit) and constructed-response (2- and 4-credit) questions. Each multiple-choice question is followed by four choices, one of which is the correct answer. Students record their multiple-choice responses on a separate answer sheet. For Session 1, students will write their responses to the constructed-response questions in their separate answer booklets. For Session 2, students will write their responses to these questions directly in their test booklets. By following the guidelines in this document, teachers help ensure that the test is valid, reliable, and equitable for all students. A series of instructions helps teachers organize the materials and the testing schedule.
- Published
- 2024
13. New York State Testing Program: English Language Arts, Mathematics, and Science Tests. School Administrator's Manual, 2024. Grades 3-8
- Author
- New York State Education Department and NWEA
- Abstract
The instructions in this manual explain the responsibilities of school administrators for the New York State Testing Program (NYSTP) Grades 3-8 English Language Arts, Mathematics, and Grades 5 & 8 Science Tests. School administrators must be thoroughly familiar with the contents of the manual, and the policies and procedures must be followed as written so that testing conditions are uniform statewide. The appendices include: (1) Certificates; (2) A tracking log of secure materials; (3) Procedures for testing students with disabilities; (4) Testing accommodation information; (5) Documents to assist with material return; (6) Contact information; and (7) Information on the Nextera™ Administration System for computer-based testing. This "School Administrator's Manual" serves to guide school administrators in general test administration activities for both paper- and computer-based testing.
- Published
- 2024
14. The Impact of Online Instruction during the COVID Pandemic on MFTB and CBE Testing Outcomes
- Author
- William Hahn and Christ Fairchild
- Abstract
The present study examines how online instruction during the COVID pandemic impacted learning and performs a partial replication of a study by Hahn et al. (2012), which compared students' testing outcomes on the Major Field Test in Business (MFTB) and the Comprehensive Business Exam (CBE). Our results indicate that online instruction during the 2020-2021 pandemic isolation period had no significant impact on pre- and post-COVID testing outcomes for either exam. It was further found that the question set employed by the CBE appears to have changed from the pre- to the post-COVID testing timeframes, making this exam questionable for assurance-of-learning purposes when comparing against prior-year results.
- Published
- 2024
15. New York State Testing Program: English Language Arts Paper-Based Tests. Teacher's Directions, Spring 2024. Grades 3 and 4
- Author
- New York State Education Department
- Abstract
The New York State Education Department (NYSED) has a partnership with NWEA for the development of the 2024 Grades 3-8 English Language Arts Tests. Teachers from across the State work with NYSED in a variety of activities to ensure the validity and reliability of the New York State Testing Program (NYSTP). The 2024 Grades 3 and 4 English Language Arts Tests are administered in two sessions on two consecutive school days. Students are asked to demonstrate their knowledge and skills in the areas of reading and writing. Students will have as much time as they need each day to answer the questions in the test sessions within the confines of the regular school day. For Grades 3 and 4, the tests consist of 1-credit multiple-choice questions, 2-credit constructed-response questions, and 4-credit (Grade 4 only) constructed-response questions. Each multiple-choice question is followed by four choices, one of which is the correct answer. Students record their multiple-choice responses on a separate answer sheet. For Session 1, students will write their responses to the constructed-response questions in their separate answer booklets. For Session 2, students will write their responses to these questions directly in the test booklets. By following the guidelines in this document, teachers help ensure that the test is valid, reliable, and equitable for all students. A series of instructions helps teachers organize the materials and the testing schedule.
- Published
- 2024
16. 2023-2024 Indiana Assessments Policy Manual
- Author
- Indiana Department of Education (IDOE), Office of Student Assessment
- Abstract
The 2023-2024 Indiana Assessments Policy Manual communicates established guidelines regarding appropriate test administration in Indiana for key stakeholders including educators and Test Coordinators. This document contains policy guidance and appendices that delineate specific aspects of test implementation, including test security protocol, reporting, and monitoring. The Indiana Assessments Policy Manual applies to all statewide assessments, including ILEARN, I AM, Digital SAT School Day, IREAD-3, NAEP, and WIDA, unless otherwise noted. In addition, "corporation" includes traditional public schools, public charter schools, accredited non-public schools, and Choice schools, unless otherwise noted. All documents should be reviewed thoroughly to facilitate prompt access to information during test administration. The Indiana Department of Education (IDOE) publishes the 2023-2024 Accessibility and Accommodations Information for Statewide Assessments document to further outline policy regarding specific universal and designated features, accommodations, and protocol for students receiving non-standard testing. General information is included in this manual, but specific guidance related to student needs is thoroughly addressed in the supplemental appendices and supporting documents.
- Published
- 2024
17. 2023-2024 WIDA Assessment Guidance
- Author
- Indiana Department of Education (IDOE), Office of Student Assessment
- Abstract
The Elementary and Secondary Education Act (ESEA), as amended by the Every Student Succeeds Act (ESSA), requires state education agencies to establish and implement standardized, statewide entrance and exit procedures for English learners (ELs). WIDA provides the English language proficiency placement and annual assessments administered in Indiana, which inform programmatic decisions such as initial identification of ELs and placement into an EL program. The "2023-2024 WIDA Assessment Guidance" provides information on the following 15 topics: (1) English Language Proficiency Requirements; (2) Participation Requirements; (3) WIDA Assessments in Grades K-12; (4) WIDA ACCESS Annual Assessments Testing Window; (5) Indiana EL Entrance and Exit Criteria; (6) Scheduling and Timing Guidance; (7) Translation of Directions in Native Language; (8) Test Results and Reporting; (9) User Roles and Responsibilities; (10) Training Requirements; (11) Testing Modes and Technology Guidance; (12) Common Testing Issues and Irregularities; (13) WIDA Accessibility Features and Accommodations; (14) WIDA Alternate ACCESS; and (15) Support and Resources.
- Published
- 2024
18. Vocational Values Scale: Initial Development and Testing of the Student Form (VVS-S)
- Author
- Kokou A. Atitsogbe and Jean-Luc Bernaud
- Abstract
This manuscript aimed to develop an instrument assessing vocational values among students (VVS-S). The scale was developed in French using three different samples of Togolese participants for item development (N = 140), exploratory (N = 308) and confirmatory analyses (N = 300). It consists of 17 items divided into the five subscales of Power, Family, Helping, Salary, and Creativity. The correlational, higher-order, and bifactor models showed that these values could be considered independently. Moreover, four of the values correlated positively but weakly with life satisfaction. The VVS-S's usefulness for research and practice in counseling, particularly in sub-Saharan Africa, is discussed.
- Published
- 2024
19. Speech, Language, and Literacy in Children with Visual Impairments: The National Survey of Children's Health
- Author
- Kyle K. Brouwer, Monica Gordon-Pershey, and Michelle Stransky
- Abstract
Data on attaining indicators of early speech, language, and literacy development, notably phonological awareness, among children with visual impairments (VI) are limited. This U.S. study utilized the "National Survey of Children's Health" (NSCH), 2016-2020, to observe the distinctive population of children with VI and speech, language, and literacy needs. Chi-square bivariate and multivariable logistic regression analyses established differences between children ages 3 to 5 years with VI (n = 186) and without VI (n = 25,354). Significant differences included lower parental education and higher rates of family poverty for children with VI. Significantly fewer children with VI had attained early phonological awareness (identifying initial sounds in words and word rhyming). Nearly three times more children with VI had been diagnosed with a speech or language disorder. Findings affirm that interventions address speech, language, and literacy development among children with VI, including explicit phonological awareness. Communication disorders research based on population health databases can inform evidence-based practice.
- Published
- 2024
20. Multimodal Data Fusion to Detect Preknowledge Test-Taking Behavior Using Machine Learning
- Author
- Kaiwen Man
- Abstract
In various fields, including college admission, medical board certifications, and military recruitment, high-stakes decisions are frequently made based on scores obtained from large-scale assessments. These decisions necessitate precise and reliable scores that enable valid inferences to be drawn about test-takers. However, the ability of such tests to provide reliable, accurate inference on a test-taker's performance could be jeopardized by aberrant test-taking practices, for instance, practicing real items prior to the test. As a result, it is crucial for administrators of such assessments to develop strategies that detect potential aberrant test-takers after data collection. The aim of this study is to explore the implementation of machine learning methods in combination with multimodal data fusion strategies that integrate bio-information technology, such as eye-tracking, and psychometric measures, including response times and item responses, to detect aberrant test-taking behaviors in technology-assisted remote testing settings.
- Published
- 2024
21. Flipped Classroom Combined with WPACQ Learning Mode on Student Learning Effect -- Exemplified by Program Design Courses
- Author
- Yu-Chen Kuo and Po-Jung Chang
- Abstract
In recent years, many schools have started implementing remote teaching due to the impact of COVID-19. The flipped classroom has been shown to be a student-centered teaching method that can effectively improve learning effectiveness. Nonetheless, if students fail to watch the instructional videos prior to class or encounter difficulties during the learning process, it can result in unfavorable learning outcomes. Therefore, this study proposes the use of the flipped classroom combined with the WPACQ learning model for programming courses. During the pre-class phase, students engage in online collaboration with their classmates, working together to accomplish the tasks outlined in the flipped classroom learning sheet. They actively share essential summaries with peers and engage in collective problem-solving of concepts and processes, all with the ultimate goal of enhancing the effectiveness of their learning experience. This study developed a learning system that combines the flipped classroom with the WPACQ learning model, guiding learners to adopt the WPACQ learning strategy. In this system, the experimental group learners go through stages including watching instructional videos (Watch), guided outline-style note-taking with peers (Peer-Summary), unit tests (Assessment), error correction (Correction), and final questioning (Question). Through viewing peers' key summary notes and engaging in error correction activities with peers, learners cultivate the ability to summarize key points and deepen their understanding of the learning content. They reflect on their own learning process to identify areas for improvement. The research results demonstrate that learners who adopt the flipped classroom combined with the WPACQ learning model exhibit better learning effectiveness than those using the conventional flipped classroom learning model.
Furthermore, this approach effectively helps learners enhance their learning motivation, self-efficacy, reflective abilities, programming learning attitudes, and reduce cognitive load.
- Published
- 2024
22. Florida Career and Professional Education Act. Technical Assistance Paper. Updated
- Author
- Florida Department of Education, Division of Career and Adult Education
- Abstract
The purpose of this technical assistance paper is to assist education leaders and administrators in the consistent implementation of the Florida Career and Professional Education (CAPE) Act in Section 1003.491, Florida Statutes (F.S.). This technical assistance paper addresses questions on recent legislation, funding, and data reporting. [For the previous report, see ED616038.]
- Published
- 2023
23. K-12 Digital Infrastructure Brief: Defensible & Resilient. Version 1.0
- Author
- Department of Education (ED), Office of Educational Technology and Cybersecurity & Infrastructure Security Agency (CISA)
- Abstract
This is the second in a series of five briefs published by the U.S. Department of Education Office of Educational Technology on the key considerations facing educational leaders as they work to build and sustain core digital infrastructure for learning. These briefs offer recommendations to complement the fundamental infrastructure considerations outlined in the 2017 update to Building Technology Infrastructure for Learning (ED589999). They are meant to provoke conversations, challenge conventions, and deepen understanding. These briefs have been purposefully designed to be easily consumed and shared. The needs, capabilities, and expectations of technology infrastructure vary significantly by context. A rural outdoor learning school in the mountainous American Southwest will face challenges and have needs much different than a district within an urban center along the East Coast with an all-digital curriculum. The recommendations within these briefs are meant to help build, augment, and sustain digital infrastructure supportive of learning no matter the location. America has made incredible progress in closing the digital access divide, providing an ever-greater proportion of students with access to broadband connectivity, devices, and digital resources. At the same time, we must acknowledge the last frontiers of connectivity can also present the most wicked problems of closing that divide. To help readers build solutions for their own contexts, these briefs offer examples from the field of those who faced pernicious challenges to connectivity, accessibility, cybersecurity, data privacy, and other infrastructure issues and designed solutions for their challenges. 
Education's digital infrastructure is officially considered critical infrastructure, and just as we work to provide physical infrastructure that is safe, healthy, and supportive for all students, we need to align resources to create digital infrastructure that is safe, accessible, resilient, sustainable, and future-proof.
- Published
- 2023
24. Impact of Different Practice Testing Methods on Learning Outcomes
- Author
- Yavuz Akbulut
- Abstract
The testing effect refers to the gains in learning and retention that result from taking practice tests before the final test. Understanding the conditions under which practice tests improve learning is crucial, so four experiments were conducted with a total of 438 undergraduate students in Turkey. In the first study, students who took graded practice tests outperformed those who took them as ungraded practice. In the second study, students who took short-answer questions before the first exam and multiple-choice questions before the second exam scored higher on the second exam. In the third study, multiple-choice, short-answer and hybrid questions produced similar learning gains. In the fourth study, students who received detailed feedback immediately after class performed similarly to those who received feedback at the beginning of the next class. The results suggested the contribution of graded practice tests in general; however, the type of questions or the timing of feedback did not predict learning outcomes.
- Published
- 2024
- Full Text
- View/download PDF
25. Brief Meditation on Test Anxiety of Eighth Grade Chinese Students: Chain-Mediating Roles of Mindfulness and Self-Efficacy
- Author
-
Ning Yue, Chieh Li, Shangwen Si, Shanshan Xu, Qin Zhang, and Lixia Cui
- Abstract
Numerous studies have revealed an alarming prevalence of test anxiety among Chinese junior high school students. Prevention of test anxiety in this population has become crucial. Brief meditation intervention (BMI) has shown promising results for promoting students' well-being and reducing test anxiety, but its mechanism for reducing test anxiety remains unknown. This study examined the effects of BMI and the roles mindfulness and self-efficacy play in mediating between the intervention and test anxiety. The BMI was optimized in content and form and culturally tailored for Chinese eighth graders. It includes guided meditation with relaxation music, mindful breathing and body scanning, and positive suggestions that promote self-efficacy. Six eighth grade classes at an urban junior high school in Beijing (N = 202, M[subscript age] = 14.14, 102 males) were assigned to either a BMI group (3 classes, N = 103) or a control group (3 classes, N = 99). Test anxiety, mindfulness, and self-efficacy measures were administered before, in the middle of, immediately after, and one month after the intervention. Repeated measures ANOVA and mediation analysis indicated that BMI had a significant effect on reducing test anxiety and enhancing mindfulness and self-efficacy over time. The study also found that mindfulness and self-efficacy played a chain of mediating roles in the relationship between BMI and test anxiety. The mediation effect value accounted for 68.35% of the intervention effects.
- Published
- 2024
- Full Text
- View/download PDF
26. Developing Collective Eyes for Iranian EFL Teachers' Computer-Assisted Language Assessment Literacy through Internet-Based Collaborative Reflection
- Author
-
Rajab Esfandiari and Mohammad Hossein Arefian
- Abstract
Computer-assisted language assessment literacy (LAL) has gained momentum in English as a foreign language (EFL) teaching since computer-based education became particularly common in language education. As EFL teachers play a critical role in administering computer-assisted language assessments, Iranian EFL teachers need to learn, relearn, and improve their computer-assisted LAL. Iranian EFL teachers can use collaborative reflection (CR) to improve their computer-assisted LAL and practices. This study, therefore, qualitatively investigated how collaborative reflective practices can improve novice and experienced EFL teachers' computer-assisted LAL. To achieve the goals of the present study, we adopted an instrumental case study to explore how EFL teachers perceive the benefits, applications, and challenges of computer-assisted language assessment practices and how teachers could improve their computer-assisted LAL through virtual CR. Four novice and experienced EFL teachers were selected purposively to explore internet-based CR through social media, observations, and interviews. The data were qualitatively analyzed through inductive thematic analysis. The results of the study demonstrated that computer-assisted language assessment practices enhanced the quality of formative assessment and assessment for learning, authenticity, and meaningfulness of tasks, administration, scoring, and interpretation, among others. Furthermore, EFL teachers experienced various personal, social, educational, and professional advantages of their computer-based language assessment practices through internet-based CR. The pedagogical implications of the study for the development of computer-assisted LAL of EFL in-service language teachers through online reflective practices are discussed.
- Published
- 2024
- Full Text
- View/download PDF
27. Developing a Dyslexia Diagnostic Team: A Feasibility Project
- Author
-
Kelly Farquharson, K. Brooke Ott, and Anne C. Re
- Abstract
Purpose: Dyslexia, a neurobiological phonological processing deficit, can be identified early; however, there is a substantial variation between and within states regarding who makes this diagnosis and when. Dyslexia evaluations are often challenging to obtain and very expensive for families who need to seek them outside of the school setting. The purpose of this study was to determine the feasibility of developing a free dyslexia diagnostic team within our university speech and hearing clinic. Method: We developed a team of academic and clinical faculty and students at the doctoral, master's, and undergraduate levels. We developed a 6-hr (1 day) testing battery and recruited families via social media. Children needed to be between the ages of 8 and 11 years and reported to have classroom difficulty related to word reading and/or spelling. Results: We were able to create a strong and successful team, testing battery, and recruitment plan. Master's students were interested in the opportunity and families drove between 246 and 453 miles to participate. We allocated enough time in our summer schedule for all parties. However, we have concerns about the sustainability of this program, especially during the academic year. Conclusions: Broadly, this dyslexia diagnostic team is a feasible endeavor. There was internal and external community interest. We identified small and solvable barriers related to debriefing and test interpretation. We also identified larger issues related to funding, faculty availability, and student support. [This paper will be published in "Perspectives of the ASHA Special Interest Groups."]
- Published
- 2024
- Full Text
- View/download PDF
28. Resistance to the Implementation of Continuous Assessment Learning Activities in Zimbabwean Secondary Schools: What and Why?
- Author
-
Simon Vurayai
- Abstract
This study employed the Systematic Review (SR) methodology to examine the content and reasons for resisting the implementation of Continuous Assessment Learning Activities (CALA) in Zimbabwean secondary schools. The Overcoming Resistance to Change (ORC) model was used as the analytical lens. The study found that factors such as education, training, communication, participation, cost, motivation, and resources determined the acceptance or rejection of the implementation of CALA. The shortage or absence of these elements has resulted in intensified resistance to the implementation of CALA. This study therefore recommends sustained effort in raising the level of education, communication, training, participation, and motivation of teachers, parents, and learners. In addition, the government of Zimbabwe should increase its support through cost bearing and resource supply as required to meet the demands of CALA.
- Published
- 2024
- Full Text
- View/download PDF
29. Teaching or Testing, Which Matters More? The Transition among Education Levels in Turkey
- Author
-
Aksoy, Erdem
- Abstract
This study analyzes the alignment between the educational policy of Turkey and high-stakes tests administered for students transitioning from secondary to high school. Research questions focus on the opinions of secondary school teachers about the alignment between transit exam questions and curricula, course books and materials, and their views on high-stakes testing. The research used a survey study model utilizing the triangulation design. A total of 109 teachers from six different majors working in Ankara participated in the study. An online survey consisting of eight questions was used to get teachers' opinions. The research question was analyzed using quantitative (percentages) and qualitative (content analysis) methods. Results showed that education serves dominantly for tests emphasizing a testing-oriented education system in the current Turkish learning and teaching process, which contrasts with education policy documents targeting 2023.
- Published
- 2023
30. School-Day Administration of the ACT® Test: Removing Barriers and Opening Doors for All Students. ACT Research. Issue Brief
- Author
-
ACT, Inc., Allen, Jeff, Cruce, Ty, and Dingler, Colin
- Abstract
ACT is a nonprofit organization whose mission is to help people achieve education and workplace success. To help fulfill that mission, ACT offers school-day testing programs that provide all students with state- or district-funded access to its college readiness and admissions assessment, removing barriers to testing and opening doors to postsecondary opportunities. During the 2021-2022 academic year, over 1.3 million students, across all 50 states and the District of Columbia, participated in ACT school-day testing through state or district testing programs. Of these 1.3 million students, it is estimated that 59% could be considered members of underserved student populations according to at least one criterion. In this brief, the authors describe six evidence-backed benefits of school-day ACT testing: (1) Removes Barriers to Testing; (2) Increases the Number of Students Identified as College Ready; (3) Increases the Number of Students Who Are Contacted by Colleges; (4) Provides a Complete Picture for Research; (5) Leads to Improvement in College Enrollment; and (6) Provides Valuable Information to Students Navigating Different Paths.
- Published
- 2023
31. Applying a Contrasting Groups Standard Setting Methodology to a Large-Scale Performance Assessment Program Used for Accountability
- Author
-
Evans, Carla M.
- Abstract
Large-scale performance assessment programs are a longstanding reform tool. However, standard setting can be a challenge for assessment programs that use primarily non-standardized assessments. The purpose of this paper is to extend this field of research by explaining the standard setting methodology applied to one more recent instantiation of a state performance assessment program. The second purpose of this paper is to discuss the data quality control and quality assurance challenges experienced after five years of applying the standard setting method. Recognizing the burgeoning interest again in large-scale performance assessment programs, the goal and intended contribution of this paper is to inform future decisions about selecting appropriate standard setting methods and dealing with unanticipated challenges that may arise during implementation based upon the lessons learned from one program. It is likely that other large-scale performance assessment programs may face similar operational challenges, especially those that do not rely on standardized tests or standardized administration procedures to produce annual determinations of student proficiency or other scores used for accountability purposes. Assessment system designers can use the insights in this paper to consider standard setting methods and how those methods may need to be adapted to promote technical quality.
- Published
- 2023
32. Inter-Rater Reliability in Comprehensive Examination Scoring: The Case for Consistent and Collaborative Rater Training and Calibration
- Author
-
Saenz, David Arron
- Abstract
There is a vast body of literature documenting the positive impacts that rater training and calibration sessions have on inter-rater reliability, as research indicates that several factors, including frequency and timing, play crucial roles in ensuring inter-rater reliability. Additionally, increasing amounts of research indicate possible links between rater reliability and grade inflation related to faculty status and length of employment. This research study analyzed the impact that consistent and collaborative training and rubric calibration, or the lack thereof, has on inter-rater reliability. Additionally, this study investigated whether faculty status and faculty length of employment influence scoring reliability. Finally, participatory action research was planned to assist in the creation of a formal training policy and processes that could then be tested to explore the impact that such training would have on future examination rater reliability. Five years of examination scoring data were utilized from a small Midwestern private graduate school. Intraclass correlation coefficient statistical analysis was employed to determine inter-rater reliability, along with independent samples t-tests to determine statistical significance between the faculty groups. Mean scoring differences were then evaluated using a Likert-type scale to assess scoring gaps among faculty. The findings within this study indicate that inter-rater reliability is negatively impacted when no formal or consistent training is provided. However, no significant differences between the mean scores were found based on faculty status or faculty length of employment. The hypothesis testing supported the main hypothesis of this research study and the current literature, in which inter-rater reliability is negatively impacted when no formal or consistent rater training and rubric calibrations are performed for raters of examinations. The implications for practice resulting from this study demonstrate the need for a formal training policy and processes to be created and consistently implemented so that inter-rater reliability is positively impacted.
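The abstract above names the intraclass correlation coefficient (ICC) as its measure of inter-rater reliability. As a hedged illustration only (not the study's actual analysis; the function name and example data are hypothetical), here is a minimal computation of ICC(2,1) — two-way random effects, absolute agreement, single rater — from a subjects × raters score matrix:

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n_subjects x k_raters) array of scores.
    """
    X = np.asarray(ratings, dtype=float)
    n, k = X.shape
    grand = X.mean()
    # Partition total sum of squares into subject, rater, and residual parts.
    ss_total = ((X - grand) ** 2).sum()
    ss_rows = k * ((X.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_cols = n * ((X.mean(axis=0) - grand) ** 2).sum()   # raters
    ss_err = ss_total - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    # Absolute-agreement form: systematic rater differences lower the ICC.
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

For example, two raters who agree exactly yield an ICC of 1.0, while a constant one-point offset between raters (perfect consistency, imperfect agreement) yields a lower value, which is why absolute-agreement ICCs are sensitive to the calibration issues the study describes.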
- Published
- 2023
33. An Introduction to Considerations for Through-Year Assessment Programs: Purposes, Design, Development, Evaluation
- Author
-
Smarter Balanced Assessment Consortium, Dadey, Nathan, and Gong, Brian
- Abstract
This document is written primarily for policy makers and state department of education staff who are considering through-year assessments, as well as consultants and contractors state departments rely on. The document identifies essential things to consider when designing or evaluating a through-year assessment program. The paper is organized into five sections. The first section provides a definition of through-year assessment, the main motivations and purposes for through-year assessments, and the tools for specifying an assessment design, including theories of action, claims, and validity arguments. The second section describes key design aspects that every through-year assessment program must address, and some options for those design aspects that distinguish through-year models. The third section discusses emerging examples of specific through-year assessment designs in terms of their design choices, challenges, and trade-offs. The fourth section provides suggestions for evaluating through-year assessment programs that go beyond current evaluation requirements for state summative assessments, such as federal Peer Review. The fifth and final section provides conclusions and a view to the future.
- Published
- 2023
34. Spring 2023 NSCAS Growth: ELA, Mathematics, and Science Technical Report
- Author
-
Nebraska Department of Education and NWEA
- Abstract
In Fall and Winter 2022-2023, the NSCAS assessments were administered in ELA and mathematics for grades 3-8. In Spring 2022-2023, the NSCAS assessments were administered in English language arts (ELA) and mathematics for grades 3-8 and in science for grades 5 and 8. The purposes of the NSCAS assessments are to measure and report Nebraska students' depth of achievement regarding the Nebraska College and Career Ready Standards; to determine if student achievement demonstrates sufficient academic proficiency to be on track for achieving college readiness; to measure students' annual progress toward college and career readiness; to inform teachers how student thinking differs along different areas of the scale, as represented by the range achievement level descriptors (RALDs), as information to support instructional planning; and to assess students' construct-relevant achievement in ELA, mathematics, and science for all students and subgroups of students. This technical report documents the processes and procedures implemented to support the 2022-2023 Nebraska Student-Centered Assessment System (NSCAS) Growth in English language arts (ELA), mathematics, and science assessments by NWEA® under the supervision of the Nebraska Department of Education (NDE). The technical report shows how the processes, methods applied, and results relate to the issues of validity and reliability and to the "Standards for Educational and Psychological Testing" (AERA et al., 2014).
- Published
- 2023
35. 2023-2024 Florida Adult Education Assessment Technical Assistance Paper
- Author
-
Florida Department of Education, Division of Career and Adult Education and Kevin O’Farrell
- Abstract
This technical assistance paper provides policy and guidance to individuals with test administration responsibilities in adult education programs. The Florida assessment policies and guidelines presented in this technical assistance paper are appropriate for state and federal reporting. Therefore, guidance and procedures regarding the selection and use of appropriate student assessment are included. The following important information for adult education programs is provided: (1) Definition of key terms and acronyms; (2) Selection of appropriate assessments by student and program type; (3) Appropriate student placement into program and instructional level; (4) Verification of student learning gains, EFL, and/or program completion; (5) Accommodation for students with disabilities and other special needs; (6) Assessment procedures for Distance Education; and (7) Training for all staff who administer the standardized assessments.
- Published
- 2023
36. Effects of Resit Exams on Student Progression: An Australian Case Study
- Author
-
Nguyen, Nga Thanh, Clark, Colin, and Juriansz, John
- Abstract
Resit exams allow students who have failed a subject a second chance to demonstrate achievement of the academic standards required for program progression. Contrary to previous studies in this field, this paper reports on the value of resit exams with a comprehensive discussion of the lessons learned from the implementation of resits in different conditions and is intended to assist educators to decide whether such exams are a useful and fair way to promote student progression. The data were obtained from student academic performance metrics, a survey with a total of 444 students, six student focus group interviews with a total of 29 students, and three individual staff interviews. The study suggests the benefits of resit exams as a tool to improve student learning outcomes and progression, especially with the close relationship between resit exams and threshold and high-stakes assessment requirements. While the psychological and progression benefits of resit exams were acknowledged by the participants, concerns were expressed about some aspects of the way the school offered the resits and the issue of inequality. Alternatives were proposed to avoid unnecessary failure and promote learning.
- Published
- 2023
37. Planning an Online Assessment Course for English Language Teachers in Latin America
- Author
-
Giraldo, Frank and Yan, Xun
- Abstract
In this article, we report the results of a study through which we collected English language teachers' needs and wants to design an online language assessment course. Through a mixed-methods approach, we asked 20 teachers from four Latin American countries what they wanted to learn in the course. The teachers wanted a course in which they could address the challenges they faced in assessment; discuss and develop new ways to assess; and learn about authentic, valid, and ethical assessment. Therefore, the findings suggest that the teachers wanted a course that mixed theory, practice, and principles of assessment. Additionally, the course should address emerging topics in English language assessment, namely bilingual assessment and the assessment of learners with special educational needs.
- Published
- 2023
38. Factors Affecting Cheating Behavior at Quetta- Pakistan
- Author
-
Rahman, Muneeb Ur, Hamza, Aadersh, and Rehman, Abdul
- Abstract
Cheating in examinations across the globe is an issue of growing concern. This study argues that the culture of cheating in exams in Balochistan has reduced the efficiency of human resources and has resulted in producing students with high qualifications but low potential in the province. Therefore, it is very important to explore the factors contributing to cheating in exams and to suggest a course of action to mitigate the menace it poses. A quantitative research method was employed for this study. Statistical techniques such as correlation and logistic regression were conducted for the statistical analysis and explanation of the study using SPSS. Theoretically, the study is inspired by rational choice theory. The findings of the study show a significant relationship between personal, institutional, and situational factors. Personal factors that contribute to cheating were found to be students' desire to excel, low GPA, and slow learning. Institutional factors include the weak administrative role of institutions, poor academic policies, overload on students, and weak performance of teachers. Situational factors such as the poor management strategy of the examiner, time pressure, and technological tools were found to be pushing students toward cheating. The study suggests that an appropriate exam hall setting, effective monitoring, concept-oriented exams, and a strong honor code can decrease cheating.
- Published
- 2023
39. Student Perceptions of Online Examinations as an Emergency Measure during COVID-19
- Author
-
Biccard, Piera, Mudau, Patience Kelebogile, and van den Berg, Geesje
- Abstract
This article explores student perceptions of writing online examinations for the first time during the COVID-19 pandemic. Prior to the pandemic, examinations at an open and distance learning institution in South Africa were conducted as venue-based examinations. From March 2020, all examinations were moved online. Online examinations were introduced as an emergency measure to adhere to safety and health protocols. Although students in developed countries have indicated benefits to online examinations, less is known about students living in the Global South when it comes to writing examinations online. Not enough is known about the benefits and challenges of online examinations since they were implemented as an emergency measure. We aimed at exploring student perceptions of writing online examinations for the first time, to improve examination processes by including student views. Through an analysis of 336 written responses to an open-ended question posed at the end of an online survey, we established that digital access, duration of the examination, and the examination system interface affected students' success in online examinations. Based on the findings, we recommend that students need to be given tools and data to participate in online examinations. Furthermore, students should be granted ample opportunity to practise writing online examinations while receiving the necessary support.
- Published
- 2023
40. Examining the Achievement Test Development Process in the Educational Studies
- Author
-
Sahin, Melek Gülsah, Yildirim, Yildiz, and Boztunç Öztürk, Nagihan
- Abstract
A literature review shows that the development process of an achievement test is mainly investigated in dissertations. Moreover, a form that sheds light on developing an achievement test is expected to guide those who will administer the test. Accordingly, the current study aims to create an "Achievement Test Development Process Control Form" and investigate achievement tests for mathematics based on this form. Document analysis was conducted within the framework of qualitative research and was based on descriptive analysis. Within the scope of the research, 1683 articles published in designated journals between 2015-2020 were reviewed. It was determined that a mathematics achievement test was developed in 39 of these articles, which were coded on the control form. The articles included in the scope of the current study were investigated in terms of the type of items used in the tests, the theory or practice on which the test was developed, the use of rubrics for open-ended items, the number of items in the pilot and final forms, the features of the test form as well as those pertaining to the table of specifications, the features of the item pool, the evaluation of pilot testing, the evaluation of the real study, test validity and reliability, and the setting in which the tests were administered. The current study findings show that in most cases an item pool was not prepared; the pilot application was not conducted or was not specified, and even when it was conducted, item analysis was not performed; test forms or example items were not included in the articles; and there were some deficiencies regarding validity. On the other hand, the articles mostly specified the test goal and reported a reliability coefficient. In light of the current findings, some suggestions are provided for test developers and those who will administer these tests.
- Published
- 2023
41. Educational Accountability and Equity: Superintendent Perspectives
- Author
-
Decman, John, Grace, Jennier, Simieou, Felix, III, and Miller, Queinnise
- Abstract
Educational equity is understood as a school system's commitment of resources to ensure that all students have equitable access, opportunity, and outcomes (Galloway & Ishimaru, 2015). Yet inequity persists in the American educational system. School accountability remains at the forefront of education policy to ensure equitable achievement between students from all backgrounds regardless of race, ethnicity, family income, linguistic background, and ability (Krejsler, 2018; Skrla, 2001). This article reflects a qualitative approach to understanding public school superintendent voices regarding experiences, feelings, and beliefs related to our ongoing era of accountability in a changing social environment. This study examines the results of interviews of 13 public school superintendents in a large metropolitan area and identifies emergent themes in superintendent thinking as it revolves around school accountability. These themes are couched in a larger discussion of educational equity.
- Published
- 2023
42. Comparison of Kernel Equating Methods under NEAT and NEC Designs
- Author
-
Ozsoy, Seyma Nur and Kilmen, Sevilay
- Abstract
In this study, Kernel test equating methods were compared under NEAT and NEC designs. In the NEAT design, Kernel post-stratification and chain equating methods were compared, taking into account optimal and large bandwidths. In the NEC design, gender and/or computer/tablet use was considered as a covariate, and Kernel test equating methods were performed using these covariates and both bandwidths. The study shows that, in the NEAT design, Kernel chain equating methods exhibited higher error than the post-stratification equating methods, while in the NEC design the lowest error was obtained from the Kernel equating method with a large bandwidth using the computer/tablet covariate. Kernel test equating results based on the NEC design that considered gender and computer/tablet use as covariates separately showed lower SEE than those of the NEC design that took these variables together as covariates. In terms of bandwidth, when all methods are compared within each design (i.e., NEAT and NEC), Kernel test equating with a large bandwidth generally resulted in fewer errors than Kernel test equating with an optimal bandwidth. When the NEAT and NEC designs are compared overall, the NEAT design has a lower SEE than the NEC design.
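Kernel equating, as discussed in the abstract above, rests on continuizing each discrete score distribution with a Gaussian kernel (where the bandwidth h controls smoothness) and then matching percentiles between forms. As a hedged sketch under stated assumptions (not the authors' implementation; function names and the bandwidth value are illustrative), the core steps look like:

```python
import math

def _phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def continuized_cdf(scores, probs, h):
    """Gaussian-kernel continuization of a discrete score distribution.

    Returns a smooth CDF; `h` is the bandwidth (larger = smoother). Uses the
    mean- and variance-preserving form common in the kernel equating literature.
    """
    mu = sum(p * x for x, p in zip(scores, probs))
    var = sum(p * (x - mu) ** 2 for x, p in zip(scores, probs))
    a = math.sqrt(var / (var + h * h))  # shrinkage so mean/variance are kept
    def cdf(x):
        return sum(p * _phi((x - a * xj - (1.0 - a) * mu) / (a * h))
                   for xj, p in zip(scores, probs))
    return cdf

def equate(x, cdf_x, cdf_y, lo, hi, tol=1e-8):
    """Equipercentile link: find y with cdf_y(y) = cdf_x(x) by bisection."""
    target = cdf_x(x)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf_y(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Equating two identical score distributions should recover (approximately) the identity mapping, which is a common sanity check; the bandwidth choices the study compares (optimal vs. large) correspond to different values of `h` passed to `continuized_cdf`.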
- Published
- 2023
43. Assessment in England at a Crossroads: Which Way Should We Go?
- Author
-
Leech, Tony
- Abstract
Assessment policy in England is often of public significance. Assessments, especially GCSEs, A levels and their vocational equivalents, have significant stakes for candidates and wider society (including for school accountability and for selection to higher education). Such assessments are frequently critiqued. There has been little major (intended) assessment reform since early 2010s developments under Michael Gove. However, the COVID-19 pandemic has upended previous certainties about assessment. Consequently, a number of reports from educationalists and think tanks into how things might be done differently in the future have been published since 2020. In this article, I overview these reports, explore similarities and differences, and assess the policy changes they recommend. The ideas are explored in relation to four overarching themes: high stakes assessment at 16, the use of online or digital assessment, the number of subjects studied in each phase, and the relationship of academic to vocational study.
- Published
- 2023
44. What Challenges Emerge When Students Engage with Algorithmatizing Tasks?
- Author
-
Tupouniua, John Griffith
- Abstract
A critical part of supporting the development of students' algorithmic thinking is understanding the challenges that emerge when students engage with algorithmatizing tasks--tasks that require the creation of an algorithm. Knowledge of these challenges can serve as a basis upon which educators can build effective strategies for enhancing students' algorithmic thinking skills. This paper presents three illustrative cases of emergent challenges evident as students grapple with the process of creating an algorithm. The first challenge highlights discrepancies between the method with which students solve a problem and the algorithm they create, and claim would, when implemented, solve the same problem. The second challenge pertains to the persistence of students' normatively incorrect algorithms, despite going through multiple iterations of testing and revising. Finally, the third challenge concerns issues around the use of test problems for supporting students in their creation of generalized algorithms. These three challenges are discussed using student data (illustrative cases) from three different mathematical algorithmatizing tasks. Suggestions are put forth for addressing some of these challenges, with a particular emphasis on practical pedagogical suggestions for cultivating students' mathematical thinking in the context of algorithmatizing tasks.
- Published
- 2023
45. 2022-2023 Indiana Assessments Policy Manual
- Author
-
Indiana Department of Education
- Abstract
The 2022-2023 Indiana Assessments Policy Manual serves as the foundation for established guidelines regarding appropriate test administration in Indiana for key stakeholders including educators and Test Coordinators. The following document contains policy guidance and contains appendices which pertain to specific aspects of test implementation, including test security protocol, reporting, and monitoring. The information in the Indiana Assessments Policy Manual applies to all state-required assessments, including ILEARN, I AM, Indiana SAT, IREAD-3, NAEP, and WIDA, unless otherwise noted. In addition, "corporation" includes traditional public schools, public charter schools, accredited non-public schools, and Choice schools, unless otherwise noted. All documents should be reviewed thoroughly to facilitate quick access to information during test administration. The Indiana Department of Education (IDOE) publishes a separate 2022-2023 Accessibility and Accommodations Information for Statewide Assessments document (ED626977) to further delineate policy for specific student needs regarding accommodations and additional processes during testing. General information is included in this manual, but specific guidance related to student needs is thoroughly addressed in the supplemental appendices and supporting documents.
- Published
- 2023
46. NAEP 2023 Facts for Teachers. Grades 4, 8, and 12 Field Test
- Author
-
National Center for Education Statistics (NCES) (ED/IES) and Hager Sharp
- Abstract
The National Assessment of Educational Progress (NAEP) is an integral measure of academic progress over time. It is the largest nationally representative and continuing assessment of what our nation's students know and can do in various subjects such as mathematics, reading, and science. The program also provides valuable insights into students' educational experiences and opportunities to learn in and outside of the classroom. Elected officials, policymakers, and educators all use NAEP results to develop ways to improve education. The NAEP 2023 field test will explore an online assessment platform and transition to different devices, such as Chromebooks, that may be more familiar to students. By participating in the field test, schools will help ensure that upcoming NAEP digitally based assessments continue to be a reliable, meaningful, and efficient measure of student achievement. This brief document highlights the NAEP 2023 field test, teachers as essential partners in NAEP, and recent NAEP results. [For "NAEP 2022 Facts for Teachers," see ED627645.]
- Published
- 2023
47. New York State Grade 8 Intermediate-Level Science Test. Manual for Administrators and Teachers, 2023. Written Test [and] Performance Test, Form A
- Author
-
New York State Education Department
- Abstract
The Regulations of the Commissioner of Education provide that an intermediate-level science test is to be administered in Grade 8 to serve as a basis for determining students' need for academic intervention services in science. The New York State Grade 8 Intermediate-Level Science Test consists of two required components: a Written Test and a Performance Test. The Written Test consists of multiple-choice and open-ended questions. The Performance Test, Form A, consists of hands-on tasks set up at three stations. The first section of this manual contains information of special interest to administrators. Subsequent sections contain information on test preparations and other guidelines along with directions for administering and scoring the Written and Performance Tests. [For the 2022 manual, see ED628917.]
- Published
- 2023
48. The Effects of Exam Setting on Students' Test-Taking Behaviors and Performances: Proctored versus Unproctored
- Author
-
Denizer Yildirim, Hale Ilgaz, Alper Bayazit, and Gökhan Akçapinar
- Abstract
One of the biggest challenges for online learning is upholding academic integrity in online assessments. In particular, institutions and faculties attach importance to exam security and to preventing academic dishonesty in the online learning process. The aim of this study was to compare the test-taking behaviors and academic achievements of students in proctored and unproctored online exam environments. The log records of students in the two exam environments were compared using visualization and log analysis methods. The results showed that while significant differences were found between the two groups in time spent on the first question, total time spent on the exam, and the mean and median times spent per question, there was no significant difference in exam scores between the proctored and unproctored groups. In other words, the results suggest that reliable exams can be conducted without proctoring through an appropriate assessment design (e.g., using multiple low-stakes formative exams instead of a single high-stakes summative exam). The results will guide instructors in designing assessments for their online courses. The study is also expected to show researchers how exam logs can be analyzed to extract insights regarding students' exam-taking behaviors.
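The per-question timing comparison described in this abstract can be sketched as follows. The log schema, student IDs, and timing values below are hypothetical assumptions for illustration only, not the study's data or methods:

```python
from statistics import mean, median

# Hypothetical log format: one (student_id, question_no, seconds_spent)
# record per answered question.
proctored = [("s1", 1, 95), ("s1", 2, 60), ("s2", 1, 110), ("s2", 2, 70)]
unproctored = [("s3", 1, 40), ("s3", 2, 65), ("s4", 1, 35), ("s4", 2, 80)]

def exam_metrics(logs):
    """Summarize time-on-task metrics of the kind the study compares."""
    times = [sec for _, _, sec in logs]
    first_q = [sec for _, q, sec in logs if q == 1]
    return {
        "first_question_mean": mean(first_q),   # time on the first question
        "per_question_mean": mean(times),       # mean time per question
        "per_question_median": median(times),   # median time per question
        "total_time": sum(times),               # total time on the exam
    }

print(exam_metrics(proctored))
print(exam_metrics(unproctored))
```

With real logs, each group's metrics would then be tested for significant differences, as the study does.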
- Published
- 2023
49. Student Preferences for Multiple Attempts and Feedback on Online Quantitative Assessments
- Author
-
Vitaly Brazhkin and Joshua K. Strakos
- Abstract
Research on multi-attempt online assessments is sparse and inconclusive and lacks the voice of students. To help bridge the gap, this paper analyzes student survey data across multiple supply chain management classes. The results show that students prefer three attempts on quantitative assessments. The preferences do not depend on age, gender, or GPA. Other findings indicate that students favor concrete feedback over abstract feedback. Rather than having the correct answer given away, students prefer the type of feedback that allows them to solve the problem on their own. This study helps pave the way to a better understanding of the effectiveness of, and student satisfaction with, different settings for online assessments of quantitative assignments.
- Published
- 2023
50. The Influence of Two Stage Collaborative Testing on Peer Relationships: A Study of First Year University Student Perceptions
- Author
-
Brian Rempel, Elizabeth McGinitie, and Maria Dirks
- Abstract
Two-stage testing is a form of collaborative assessment that creates an active learning environment during test taking. In two-stage testing, students first complete an exam individually, and then complete a subset of the same questions as part of a learning team, with the ultimate exam score being a weighted average of the individual and team portions. In the second (team-based) part of the exam, students are encouraged to discuss solutions until a consensus among team members is achieved, thus actively engaging students with course material and each other during the exam. A short open-ended survey was administered to students at the end of the semester, and the responses were coded by thematic analysis, with themes generated using inductive coding based on the principles of grounded theory. The most important conclusion was that students overwhelmingly preferred two-stage tests for the development of positive peer relationships in class. The most common themes that emerged from student responses involved positive feelings from structured interaction with their peers, the benefits of meeting and socializing with other students, the sharing of knowledge with others, and solidarity or positive affect towards the process of working as part of a team. Finally, students also expressed an overall preference for two-stage exams when compared to solely individual, one-stage exams.
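The weighted-average scoring described in this abstract amounts to a one-line computation. The 25% team weight below is an illustrative assumption; the abstract does not specify the weighting the instructors used:

```python
def two_stage_score(individual, team, team_weight=0.25):
    """Weighted average of the individual and team exam portions.

    team_weight=0.25 is an assumed, illustrative value; the article
    does not state the actual weighting.
    """
    return (1 - team_weight) * individual + team_weight * team

# A student scoring 70 alone and 90 on the team portion:
print(two_stage_score(70, 90))  # 0.75*70 + 0.25*90 = 75.0
```

Weighting the individual portion more heavily keeps the final grade primarily a measure of individual mastery while still rewarding the collaborative stage.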
- Published
- 2023