Using Scientific Abstracts to Measure Learning Outcomes in the Biological Sciences
- Author
William C. Wolf, Patrick L. Hindmarsh, Rebecca Giorno, Jeff Shultz, and Jeffrey V. Yule
- Subjects
multiple-choice assessment, critical thinking, scientific abstract, scientific literacy, Tips & Tools, Biology (General), Special aspects of education, Education, Medical education, Scientific literature, Rote learning, Bioinformatics, Computer science, Biological sciences, Direct measure, Competence (human resources), General Biochemistry, Genetics and Molecular Biology, General Immunology and Microbiology, General Agricultural and Biological Sciences
- Abstract
Educators must often measure the effectiveness of their instruction. We designed, developed, and preliminarily evaluated a multiple-choice assessment tool that requires students to apply what they have learned to evaluate scientific abstracts. This examination methodology offers the flexibility both to challenge students in specific subject areas and to develop the critical thinking skills that upper-level classes and research require. Although students do not create an end product (performance), they must demonstrate proficiency in a specific skill that scientists use regularly: critically evaluating scientific literature via abstract analysis, a direct measure of scientific literacy. Scientific abstracts from peer-reviewed research articles lend themselves to in-class testing, since they are typically 250 words or fewer and their analysis requires skills beyond rote memorization. To gauge the effectiveness of particular courses, we performed pre- and postcourse assessments in five different upper-level courses (Ecology, Genetics, Virology, Pathology, and Microbiology) to determine whether students were developing subject-area competence and whether abstract-based testing was a viable instructional strategy. Assessment should cover all levels of Bloom's hierarchy, which can be accomplished via multiple-choice questions (2). We hypothesized that by comparing the mean scores of pre- and posttest exams designed to address specific tiers of Bloom's taxonomy, we could evaluate how effectively a course prepares students to demonstrate subject-area competence. We also sought to develop general guidelines for preparing such tests and methods for identifying test- and course-specific problems.
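The abstract does not specify how the pre- and posttest mean scores were compared; as one illustration, a paired comparison of per-student scores might look like the following sketch. The scores shown, and the choice of a paired t-test, are hypothetical assumptions for illustration only, not details taken from the paper.

```python
# A minimal sketch of comparing matched pre-/postcourse exam scores.
# Assumption: each student took both the pretest and the posttest, so a
# paired t-test (scipy.stats.ttest_rel) is one reasonable choice of test.
from scipy.stats import ttest_rel

# Hypothetical per-student percentage scores on abstract-based exams
pre_scores  = [52, 61, 48, 70, 55, 63, 58, 66]
post_scores = [64, 72, 59, 78, 60, 75, 70, 71]

stat, p_value = ttest_rel(post_scores, pre_scores)
mean_gain = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)
print(f"mean gain: {mean_gain:.1f} points, t = {stat:.2f}, p = {p_value:.4f}")
```

In practice one would run such a comparison separately for the questions targeting each tier of Bloom's taxonomy, since the abstract frames course effectiveness in terms of score changes at specific tiers.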
- Published
2013