Evaluating the effect of assessment form design on the quality of feedback in one Canadian ophthalmology residency program as an early adopter of CBME.

Canadian Journal of Ophthalmology / Journal canadien d'ophtalmologie (2023)

Abstract
The initial findings of this research study were presented in oral presentation format at the Annual Meeting of the Canadian Ophthalmological Society in Halifax, Nova Scotia, June 9–12, 2022.

At the core of competency-based medical education (CBME) is the philosophy of frequent assessment that is robust, continuous, and of excellent quality [1]. Ideally, narrative comments entered via electronic forms provide both a written record of the resident's performance at a specific task and coaching to encourage improvement [1,2]. This study explored whether the inherent structure of the assessment form used had an effect on the quality of written feedback as measured by the Quality of Assessment of Learning (QuAL) score [3]. Both the Canadian Ophthalmology Assessment Tool for Surgery (COATS) procedural assessment form (an ophthalmology-specific adaptation of the Ottawa Surgical Competency Operating Room Evaluation [O-SCORE; Hurley B, O'Connor M, University of Ottawa Department of Ophthalmology, personal communication, August 2018]) [4] and the Entrustable Professional Activity (EPA) form are routinely used at our centre but prompt feedback using different wording. The EPAs used for each stage of training, and the forms used to capture this procedural and clinical data, were developed by the Department of Ophthalmology program director and an educational consultant based on the Royal College Objectives of Training and the CanMEDS Milestones Guide; input from faculty members, residents, and other stakeholders is continually integrated into this adaptive and dynamic assessment system [5].

Following institutional ethics approval (TRAQ 6029081), all available assessment data were retrieved from Elentra, Queen's University's integrated teaching and learning platform, for July 2017 to December 2020, organized by form type (Table 1), anonymized, and coded with a unique identifier for evaluator and target resident. Written feedback was assigned a QuAL score out of 5 based on the previously validated rubric (Table 2).
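To make the rubric concrete, the sketch below shows one way the three QuAL components could be combined into the total score out of 5. This is an illustrative reconstruction based on Table 2, not the study's actual grading procedure; the function name, component names, and the handling of a connection score without a suggestion are assumptions.

```python
# Minimal sketch of QuAL scoring as described in Table 2 (illustrative only;
# the study's grading was done manually, and these names are hypothetical).

def qual_total(evidence: int, suggestion: int, connection: int) -> int:
    """Combine QuAL components into a total out of 5.

    evidence   : 0-3, sufficiency of the description of performance
    suggestion : 0-1, whether a suggestion for improvement is present
    connection : 0-1, whether the suggestion is linked to the behaviour described
    """
    if not 0 <= evidence <= 3 or suggestion not in (0, 1) or connection not in (0, 1):
        raise ValueError("component score out of range")
    # Assumed handling: a connection score presupposes a suggestion,
    # consistent with the rubric's wording.
    if suggestion == 0:
        connection = 0
    return evidence + suggestion + connection


# Example: a full description, with a suggestion clearly linked to it.
print(qual_total(evidence=3, suggestion=1, connection=1))  # 5
```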
All individual assessments were blindly scored by an ophthalmology faculty member, and a random 10% sample was independently rescored by an ophthalmology resident to assess inter-rater reliability.

Table 1. Assessment forms included in the analysis

Form type | CBME stage                  | Form number                          | Feedback prompts
EPA       | Transition to discipline    | D1, 2, 3, 4.1,* 4.2,* 5, 6, 7, 8, 9  | "Next steps"; "Global feedback"
EPA       | Fundamentals of discipline  | F1, 2, 3, 4, 5, 6, 7, 8, 10, 11      | "Next steps"; "Global feedback"
EPA       | Core of discipline          | C1, 2, 3, 4, 5, 6, 7, 8, 9, 10       | "Next steps"; "Global feedback"
COATS     | All stages                  | N/A                                  | "Give at least 1 specific aspect of the procedure done well"; "Give at least 1 specific suggestion for improvement"; "Global feedback"

*Two versions of D4 were included, with 4.2 being a revised version of 4.1. "Form number" refers to the specific form used to evaluate a single EPA within a CBME stage. For example, C1 evaluates a resident's ability to "Assess and diagnose subspecialty patients with clinical presentations expected in general practice."
COATS, Canadian Ophthalmology Assessment Tool for Surgery; EPA, Entrustable Professional Activity.

Table 2. Quality of Assessment of Learning (QuAL) score components, adapted from Chan et al. 2020 [3]

Evidence (0–3): Does the rater provide sufficient evidence about resident performance? 0 = no comment at all; 1 = no, but a comment is present; 2 = somewhat; 3 = yes/full description
Suggestion (0–1): Does the rater provide a suggestion for improvement? 0 = no; 1 = yes
Connection (0–1): Is the rater's suggestion linked to the behaviour described? 0 = no; 1 = yes

In total, 2617 individual assessments from 20 different residents spanning postgraduate training years 1–5 were graded. The intraclass correlation coefficient (ICC) for the 2 independent graders was excellent at 0.90 (95% CI, 0.88–0.92; p < 0.001) for the total QuAL scores [6]. COATS forms (n = 483) demonstrated a significantly higher total QuAL score (out of 5) than EPA forms (n = 2134), with mean values of 3.48 versus 2.26 (p < 0.001). COATS forms also demonstrated a significantly higher "evidence of performance" score (out of 3) than EPA forms, with mean values of 2.26 versus 1.60 (p < 0.001). Full marks (a score of 5 of 5) were given to 49.7% (n = 240) of COATS forms versus 16.5% (n = 353) of EPA forms (p < 0.001).
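The letter does not report which statistical procedures were used, so the following sketch is only one plausible reconstruction of the analysis: a two-way random-effects, absolute-agreement intraclass correlation (ICC(2,1), per Koo and Li [6]) for the double-scored 10% subsample computed with the pingouin package, Welch's t-test for the COATS-versus-EPA difference in mean total QuAL score, and a chi-square test for the difference in the proportion of full-mark (5/5) forms. The column names and toy data are illustrative.

```python
# Sketch of a reliability and comparison analysis of QuAL scores.
# Assumptions: ICC(2,1), Welch's t-test, and chi-square; the letter does not
# state which tests were actually applied. Data and column names are made up.
import pandas as pd
import pingouin as pg
from scipy import stats

# Long-format table of the double-scored subsample: one row per (assessment, rater).
double_scored = pd.DataFrame({
    "form_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "rater": ["faculty", "resident"] * 4,
    "qual_total": [5, 5, 2, 3, 4, 4, 1, 1],
})
icc = pg.intraclass_corr(data=double_scored, targets="form_id",
                         raters="rater", ratings="qual_total")
print(icc.loc[icc["Type"] == "ICC2", ["ICC", "CI95%"]])  # two-way random, absolute agreement

# All scored assessments: form type plus total QuAL score.
scored = pd.DataFrame({
    "form_type": ["COATS", "COATS", "EPA", "EPA", "EPA"],
    "qual_total": [5, 3, 2, 1, 5],
})
coats = scored.loc[scored["form_type"] == "COATS", "qual_total"]
epa = scored.loc[scored["form_type"] == "EPA", "qual_total"]

# Difference in mean total QuAL score between form types (Welch's t-test).
print(stats.ttest_ind(coats, epa, equal_var=False))

# Difference in the proportion of full-mark (5/5) forms (chi-square on a 2x2 table).
full_marks = pd.crosstab(scored["form_type"], scored["qual_total"] == 5)
chi2, p, dof, _ = stats.chi2_contingency(full_marks)
print(chi2, p)
```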
EPA forms were designed for all stages of training to address key competencies in ophthalmology across a broad range of observed activities. In contrast to the EPA form, which includes fillable text boxes with the prompts "Next steps" and "Global assessment," COATS forms were used to assess only procedural competencies [4]. These forms include more specific prompts, asking the evaluator to identify 1 aspect of the procedure done well in addition to an element that could be improved, similar to the "stop, start, continue" method [7]. As such, COATS forms are inherently more targeted and mesh with the existing scaffold of the QuAL score, discouraging general comments such as "Great work." Intuitively, we hypothesized that they would yield higher QuAL scores. In fact, the mean total score for COATS forms was 3.48 versus 2.26 for EPA forms (p < 0.001). Perhaps more illustrative of the benefit of this type of form is that full marks (5 of 5) were given to significantly more COATS forms than EPA forms. We suggest that these findings are likely due to the clear direction given for the text fields in these forms versus the more nebulous "Next steps" prompt in the EPA forms. However, because the COATS forms were used exclusively for procedures, it is also possible that procedural feedback is easier to provide and naturally lends itself better to constructive narrative feedback. Through use of the QuAL score in evaluating narrative feedback, this study demonstrates that procedural COATS forms with structured feedback prompts may be more effective in guiding evaluators to deliver comments on resident performance.

The authors have no proprietary or commercial interest in any materials discussed in this article.

References

1. Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR; International CBME Collaborators. The role of assessment in competency-based medical education. Med Teach. 2010;32:676-682.
2. Lockyer J, Carraccio C, Chan MK, et al. Core principles of assessment in competency-based medical education. Med Teach. 2017;39:609-616.
3. Chan TM, Sebok-Syer SS, Sampson C, Monteiro S. The Quality of Assessment of Learning (QuAL) score: validity evidence for a scoring system aimed at rating short, workplace-based comments on trainee performance. Teach Learn Med. 2020;32:319-329.
4. Gofton WT, Dudek NL, Wood TJ, Balaa F, Hamstra SJ. The Ottawa Surgical Competency Operating Room Evaluation (O-SCORE): a tool to assess surgical competence. Acad Med. 2012;87:1401-1407.
5. Braund H, Dalgarno N, McEwen L, Egan R, Reid MA, Baxter S. Involving ophthalmology departmental stakeholders in developing workplace-based assessment tools. Can J Ophthalmol. 2019;54:590-600.
6. Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. 2016;15:155-163.
7. Hoon A, Oliver E, Szpakowska K, Newton P. Use of the "stop, start, continue" method is associated with the production of constructive qualitative feedback by students in higher education. Assess Eval High Educ. 2015;40:755-767.