Ensuring that speech pathology students are sufficiently competent to practise their profession is of critical importance to the speech pathology profession, students, their future employers, and clients/patients. This thesis describes the development and validation of a competency-based assessment of speech pathology students’ performance in the workplace and of their readiness to enter the profession. Development involved an extensive literature review on the nature of competency and its relationship to professional practice, the purpose and nature of assessment, and the validation of performance assessments. An assessment tool, available both online and in hard copy, was designed by integrating multiple sources of information about speech pathology and the assessment of workplace performance. Sources included research, theory, expert opinion, current practice, and focus group consultations with clinical educators and speech pathology students. The resulting assessment tool and resource material covered four generic components of competency (clinical reasoning, professional communication, lifelong learning, and professional role) and seven occupational competencies previously developed by the speech pathology profession. The tool comprised an assessment format, in either booklet or online form, in which clinical educators rated students’ performance on the competencies at mid-placement and end-placement using a visual analogue scale. Behavioural descriptors and an assessment resource booklet informed and supported clinical educators’ judgements. The validity of the assessment tool was evaluated through a national field trial, using Messick’s six interrelated validity criteria, which address the content, substantive, structural, generalisability, external, and consequential aspects of validity (Messick, 1996).
The validity of the assessment tool and its use with speech pathology students was evaluated through Rasch analysis, parametric statistical evaluation of relationships between the measures yielded by the Rasch analysis and other factors, and student and clinical educator feedback. The assessment tool was found to have strong validity characteristics across all validity components. Item fit statistics generated through Rasch analysis ranged from .81 to 1.17, strongly supporting the conclusion that the assessment items sampled a unidimensional construct of workplace competency for speech pathology students and confirming that both generic and occupational competencies are necessary for the competent practice of speech pathology. High item and person reliabilities (analogous to Cronbach’s alpha) were found (.98 and .97 respectively), and a wide range of person measures (-14.2 to 13.1) was generated. This indicated that the assessment tool was highly reliable and that it identified a large spread of ability and a clear hierarchy of development on the construct. Reliability was further confirmed by high intraclass correlation coefficients for a small group of paired clinical educators rating the same student in the same workplace (.87) or in different workplaces concurrently (.82). Rasch analysis of the visual analogue scale used to rate student performance on the 11 items of competence showed that clinical educators could reliably discriminate seven categories, or levels, of student performance. This, in combination with careful calibration procedures, resulted in an assessment tool that Australian speech pathology pre-professional preparation programs can use with confidence to place their students’ workplace competence into seven zones of competency, with the seventh representing sufficient competence to enter the profession.
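To illustrate the kind of fit statistic reported above, the following is a minimal sketch of how an infit mean-square (the information-weighted Rasch fit statistic) can be computed for a set of items. It is purely illustrative: the data are simulated, the responses are dichotomised for simplicity (the actual tool used a polytomous visual analogue scale), and real analyses of this kind are normally run in dedicated Rasch software rather than hand-coded.

```python
# Illustrative sketch only: simulated data, not the thesis data.
import numpy as np

def infit_meansquare(responses, theta, beta):
    """Information-weighted (infit) mean-square fit statistic per item.

    responses: (n_persons, n_items) matrix of 0/1 scores
    theta: (n_persons,) person ability measures, in logits
    beta:  (n_items,) item difficulty measures, in logits
    """
    # Model-expected probability of success for each person-item pair
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
    var = p * (1.0 - p)              # model variance of each response
    resid_sq = (responses - p) ** 2  # squared residuals
    # Infit: variance-weighted average of squared residuals; values
    # near 1.0 indicate responses consistent with the Rasch model
    return resid_sq.sum(axis=0) / var.sum(axis=0)

# Simulate 500 students responding to 11 items under the model itself,
# so the resulting infit values should all sit close to 1.0
rng = np.random.default_rng(42)
theta = rng.normal(0.0, 2.0, size=500)  # simulated person measures
beta = np.linspace(-2.0, 2.0, 11)       # simulated item difficulties
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
responses = (rng.random(p.shape) < p).astype(float)

infit = infit_meansquare(responses, theta, beta)
print(np.round(infit, 2))
```

Because the simulated responses are generated from the model, each item's infit lands near 1.0; values well below 1 would indicate overly predictable (redundant) items, and values well above 1 would flag misfitting items, which is why the observed range of .81 to 1.17 supports unidimensionality.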
The assessment tool also showed strong potential for identifying marginal students and for future use in promoting quality teaching and learning of professional competence. Limitations of the research and of the tool’s validity were discussed, and recommendations were made for future research. First, the assessment was made by the clinical educator, who has dual, and possibly conflicting, roles as facilitator and assessor of student learning. Second, situating the assessment in the real workplace limits students’ opportunities to demonstrate competence to those that naturally arise there. Paradoxically, both of these factors also contributed to the validity of the assessment tool. It was recommended that the assessment tool be revised on the basis of the information gathered from the field trial, and that further data be collected to ensure a broader proportional representation of speech pathology programs, to investigate possible threats to validity, and to explore the areas in which the tool showed promise. This research developed the first prototype of a validated assessment of entry-level speech pathology competence that is grounded in a unified theoretical conception of entry-level competence in the profession and of the developmental progression required to reach it. It will assist the profession by helping to ensure that speech pathologists enter the workplace well equipped to provide quality care to their future clients, the ultimate goal of any professional preparation program.

Messick, S. (1996). Validity of performance assessments. In G. W. Phillips (Ed.), Technical issues in large-scale performance assessment (pp. 1-18). Washington, DC: National Center for Education Statistics.