Testing Inter-Rater Reliability in Rubrics for Large Scale Undergraduate Independent Projects

Proceedings of the Canadian Engineering Education Association (CEEA), 2017

Abstract
This work outlines the process of testing inter-rater reliability in rubrics for large-scale undergraduate independent projects; more specifically, the thesis program within the Division of Engineering Science at the University of Toronto, in which 200 students work with over 100 supervisors on an independent research project. Over the last few years, rubrics have been developed both to guide the students in the creation of their thesis deliverables and to improve the consistency of supervisor assessment. To examine inter-rater reliability, 12 final thesis reports were assessed using the course rubric by two generalist experts, who have worked extensively with the thesis course and designed the rubrics, alongside the project supervisor. We found substantial agreement between the two generalist experts, but only fair agreement between the generalist experts and the supervisors, suggesting that while the rubric does help develop a common set of expectations, there may be other aspects of the supervisor's assessment practice that need to be considered.
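The abstract does not name the agreement statistic, but the labels "substantial" (0.61-0.80) and "fair" (0.21-0.40) match the Landis and Koch interpretation bands for Cohen's kappa. The sketch below, using hypothetical rubric scores (not data from the paper), shows how pairwise kappa could be computed for such a study; the rater names and score values are illustrative assumptions.

```python
# Minimal sketch of pairwise inter-rater agreement on rubric scores.
# The paper does not publish its analysis code; all scores below are
# hypothetical illustrations, not the study's data.
from sklearn.metrics import cohen_kappa_score

# Hypothetical rubric levels (e.g., 1 = below expectations ... 4 = exceeds)
# assigned to the same 12 thesis reports by each rater.
expert_a   = [3, 4, 2, 3, 3, 4, 2, 3, 4, 3, 2, 3]
expert_b   = [3, 4, 2, 3, 4, 4, 2, 3, 4, 3, 2, 3]
supervisor = [4, 4, 3, 4, 3, 4, 3, 4, 4, 4, 3, 4]

# Unweighted kappa treats rubric levels as nominal categories; a linear
# weighting penalizes near-misses less than distant disagreements, which
# is often preferred for ordinal rubric scales.
print(cohen_kappa_score(expert_a, expert_b))                      # expert vs. expert
print(cohen_kappa_score(expert_a, supervisor, weights="linear"))  # expert vs. supervisor
```

With scores like these, the expert-expert pair (one near-miss in twelve) lands in the substantial-to-almost-perfect band, while the expert-supervisor pair scores much lower, mirroring the pattern the paper reports.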
Key words
reliability, rubrics, testing, inter-rater