Critical Reviews in Health Professions Education Research.

Journal of Graduate Medical Education (2023)

Abstract
Health professions education (HPE) has been framed as a field that is not entirely theoretical or practical, as well as one that is not constrained by the worldviews of a single discipline.1 As such, HPE scholars often need to synthesize knowledge from diverse disciplines or theoretical perspectives to advance thinking about difficult problems. As a result, critical reviews have a robust and valuable history in HPE. Such reviews are methodologically flexible, which enables scholars to advance understanding of complex issues by appraising theory and evidence from an array of sources, rather than prioritizing systematic reporting of everything written within a single discipline.

Within the taxonomy of literature reviews,2 critical reviews fall under the broad umbrella of narrative reviews.3 A key feature that often distinguishes critical reviews from other narrative reviews is that they draw on literature and theory from different domains, which enables investigators to reenvision ways of interpreting a problem. Those domains can include multiple disciplines, such as when the fields of psychology, organizational behavior, and behavioral economics were used to help rethink the role of incentives in recruiting and retaining medical clinician educators.4 Critical reviews can pertain to a specific theory, such as when conversation analysis theory was used to offer a new perspective on the patient-doctor relationship.5 Or they can be built around a particular empirical finding, such as when patients' priorities for clinical communication were found to not match assumptions about “good” communication.6 Authors of critical reviews bring an interpretive lens to bear on knowledge synthesis, either through their methods (by designing their review from a specific orientation or theoretical perspective) or analyses (through the development of a new perspective about the focal problem). Thus, in critical reviews, researchers act as research instruments by using their perspectives to appraise and interpret the literature uncovered, rather than primarily acting to describe or summarize it. For this reason, critical reviews are particularly useful for problems that may require a new way of thinking or that require reviewers to use their unique expertise and judgment to take a stance on the information uncovered and where the field ought to go as a result (Box).

An additional distinction is that many forms of narrative review focus on exploring how a relatively defined topic has been addressed within a single literature (eg, burnout in medical education7 or the learning environments experienced by underrepresented minority medical students).8 In contrast, critical reviewers most often work across multiple disciplines to explore whether each provides unique explanatory value and if comparison between them generates new insights. As an example, Ilgen et al9 aimed to “define and elaborate the concept of ‘comfort with uncertainty’… in clinical settings by juxtaposing a variety of frameworks and theories in ways that generate more deliberate ways of thinking about, and researching, this phenomenon.” We argue that HPE research has benefited substantially from such engagement with various lenses, by generating insights into multifaceted problems that are unlikely to have simple solutions.10

Despite the strength of alignment between critical reviews and the complex problems that drive the HPE field, limited methodological guidance is available, and reporting is highly variable. That state leaves researchers, reviewers, and readers with more questions than answers regarding best practices.11 To fill this gap, we offer an overview and practical guidance by drawing on existing methodological literature, a scan of recently published critical reviews in HPE journals, and our own experiences reading, writing, and reviewing critical reviews. We began by examining 19 articles that stated a “critical review” methodology and were published within the past 10 years in 4 HPE journals with the highest impact factors: Academic Medicine, Medical Education, Advances in Health Sciences Education, and Medical Teacher. We examined introductions and methods sections to extract and compare authors' stated intents and reported procedures. To offer best practice advice for those reading and conducting critical reviews, we then integrated our findings with the limited methodological literature about critical reviews in HPE and other relevant fields. Modeling the goal of critical reviews, our discussion extends beyond reporting “how others have done it” to offer an argument, grounded in the literature, regarding why certain features or strategies should take precedence. In doing so, we sought to offer best practices on critical review design while maintaining the flexibility needed to tailor these reviews to research questions that do not fit well within more structured methods of knowledge synthesis.

Authors of critical reviews generally adopt a constructivist stance, which acknowledges and capitalizes on their background, expertise, and perspectives. Such is the basis for judgments about the quality and relevance of literature along with how it might be interpreted to build understanding in relation to the focal phenomenon.12 Thus, critical reviews engage with interpretive qualitative research traditions. The goal is not to create generalizable truth, eliminate bias, or produce perfectly replicable methods; instead, it is to capitalize on the unique outlooks developed by researchers during the review process. Rather than seeking to describe or define “what worked,” the purpose of these reviews is to reconceptualize and question assumptions, which often culminates in a proposal for a new theoretical perspective or model.2,12,13

The necessarily loose boundaries around critical reviews that this approach creates can cause frustration because others exploring the same issues in the same way may not draw upon the same literature or replicate a specific search strategy. More than a necessary evil, that is a strength of critical reviews because the review team and their unique interpretations and methodological decisions are considered valuable components of the research process. Thus, critical reviews are not the right review type for those seeking (as authors or readers) a definitive or final solution to a specific problem.

As with all research processes, it is important for authors to try to avoid only marshaling evidence that supports their claims while ignoring contradictory data; however, this does not mean one should attempt to include everything to avoid “biased” selection. Instead, critical reviewers must be reflexive14 and transparent about how research decisions were made. Rather than seeing disagreements among team members with different expertise or perspectives as problematic, differences can be an opportunity to challenge assumptions and ensure that decisions are well thought out.15 Determining how literature or theories from fields outside HPE may inform the problem under review requires a deep understanding of how the phenomenon of interest has been understood in HPE. Hence, most critical review tasks cannot be turned over to a research assistant with instructions to follow a particular process.

Despite these complexities, critical reviews are indispensable when established theoretical and methodological approaches have come up short. They allow investigators to experiment by creatively and organically exploring what insights can be drawn from the juxtaposition of broad and diverse literature, to reflect on assumptions that have been built into conceptions of the problem, to consider how perspectives might change when adopting different disciplinary lenses, and to enable the development of new ideas that may “unstick” thinking.

In our analysis of recent critical reviews, methods sections varied widely. In fact, about a third included no discernible methods section at all. Nonetheless, we were able to identify several hallmarks of critical review methods that appear to illustrate best practices. We would urge caution with respect to treating these elements as linear because, in our experience and among the reviews we examined, literature search and analysis in critical reviews are most often concurrent and iterative processes.

As noted, critical reviewers take a constructivist stance. While authors of these reviews rarely state an explicit epistemological or theoretical perspective, many draw on methodological tools from other established review types or from general qualitative research to support their process. As a rare example of explicit methodological blending or borrowing, Pedersen et al drew on philosophical hermeneutics to frame the interpretive processes underlying their review on empathy in medical education.16 We believe that offering an explicit guiding rationale for the study and evidence of efforts made to extend the authors' thinking beyond their original assumptions is more important than using a specific frame. Labels noting a particular epistemological perspective or theory should not take the place of rich descriptions of what was actually done and why. Addressing the assumptions and logic underlying the methodological decisions not only acknowledges the role of the researcher in the development of their data and interpretations, but also offers the reader more information with which to evaluate the adequacy of the arguments.

As with other review methods, reviewers should explicitly state their research questions or study objectives,17 to provide clarity of purpose and the opportunity to judge alignment between aims and methodological choices. In the case of critical reviews, research questions are more often explorative than definitive; as such, they tend to evolve over the course of the review.18 Research questions generally focus on integrating new literature from a variety of disciplinary perspectives to develop a new approach, understanding, or framework for thinking about the focal phenomenon or topic. For example, in striving to understand assessment practices common to graduate medical education, Gingerich et al19 sought to develop a “synthesis of related research domains focused on understanding the source of variance in social judgments,” with the intent to “stimulate different ways of asking questions about the limitations of rater-based assessments prior to negotiating potential solutions.”

Literature searches conducted for a critical review should focus on identifying sources of particular relevance rather than capturing everything that has been written on the subject. This may mean finding seminal articles, such as highly cited literature reviews, that offer trustworthy overviews of the theory, assumptions, and evidence cited by researchers from several disciplines. Unlike other narrative reviewers, critical reviewers often utilize methodological tools that go beyond standard database searches to ensure exposure to unfamiliar terms and literature. Consultation with individuals who have relevant content, theoretical, or methodological expertise not represented on the review team can guide searches and ensure that the most relevant sources and key features of unfamiliar literature are captured.16,20 Hand-searching reference lists and citations can prove vital in finding central texts from other fields.

In the critical reviews we examined, some authors offered no description of their search approach, while others offered exhaustive lists of search terms and databases. We suggest that reporting should offer a sense of how the search strategy was crafted, who and what resources were consulted, and what the search was (and wasn't) intended to achieve. This information can provide readers with evidence of the review's strengths and limitations. However, given that critical reviews are not intended to be exhaustive or comprehensive, the focus should be on whether the authors are likely to have uncovered valuable and insight-provoking information, not whether others can replicate the search; as such, extensive lists of search algorithm information are rarely necessary.

Rather than using predefined, clear-cut inclusion and exclusion criteria, critical reviewers generally use their unique expertise and perspectives to appraise articles for inclusion based on their sense of a source's relevance to the research question and the value added by its information. As in all reviews, many resources uncovered will lack relevance or fail to meet rigor expectations, which will allow them to be easily excluded. However, critical reviewers must also make nuanced and individualized judgments to appraise the literature for quality and relevance. In doing so, they will often purposively sample a small subset of the richest and/or most relevant articles gathered from their searches. Quality and information value are more important than quantity. The proximate goal is to reflect the literature well, not to stake claim to a comprehensive description, because the ultimate goal is to gain insight into the topic, not offer conclusive evidence of how often or to what magnitude something is likely to occur. For example, among the reviews we examined, authors determined inclusion in their sample based on their assessment of an article's “representativeness” of a particular discourse or approach,21 trustworthiness of the evidence it provided,22,23 conceptual contribution to the field,9 relevance to the problem,20 or potential for shifting discourse.7,20

These abstract and subjective evaluations can be difficult to describe concisely. We suggest that authors of critical reviews should strive to define the criteria for their judgments about inclusion and how or if they were guided by concepts such as saturation24,25 or theoretical sufficiency,26 as commonly applied in qualitative research. In other words, rigor can be demonstrated by showing that the generation of data (through expert consults, searches, and sampling) and analyses appear to do justice to the literature such that continued exploration offers diminishing returns. This requires investigators to reach a point where they see redundancies in the articles encountered and are able to generate a cohesive representation of the phenomenon under study. Strategies such as regular discussion among team members with diverse backgrounds and combining multiple search approaches (eg, databases, expert consultation, hand-searches) can support reflexivity, which ensures that the review team challenged their own thinking and that their stated results represent a robust picture of relevant concepts.

Analysis methods for critical reviews can align well with, and borrow from, other qualitative research methodologies. For example, in one study the authors drew on content analysis to structure the development of themes from their data.27 Other authors used forms of discourse analysis to examine how included articles described the concepts under review, rather than focusing on the primary literature's findings or discussion.28,29 Qualitative analytic approaches have great potential to enhance the rigor of critical reviews by offering structure and focus in a way that is familiar to those conducting the review and their readers. It is important that investigators be thoughtful about selecting analytical methods that are congruent with their objectives and other aspects of their methods. For example, discourse analysis might be more appropriate for examining how included sources discuss the phenomenon, whereas other techniques, such as content or thematic analysis, might be more helpful for focusing on what was discussed.

We agree with Grant and Booth that, regardless of analysis methods, a critical review's “product perhaps most easily identifies it” because critical reviews “typically manifest in a hypothesis or a model, not an answer.”2 The result should leave the reader with a new way of thinking that is coherent and credible, resonates in their context, and has potential to shift their practice. Given that critical reviews are diverse in focus and scope (which is one of their selling points), we found no specific reporting guidelines. To ensure transparent and credible reporting, we direct investigators to general reporting guidelines for qualitative research, such as the article on standards for reporting qualitative research from O'Brien et al,30 rather than suggesting reporting guidelines for other review types. The latter tend to focus on exhaustive reporting of search methods, rather than articulating the logic used to guide sampling strategies and analysis. Transparency in reporting is critical as there is no absolute roadmap.18

The most interesting research questions in HPE, in our opinion, are not “What was done?” or “Does it work?”31,32 but those that instead challenge accepted assumptions about concepts and practices. To better understand the phenomena of interest and, in turn, better direct practice, HPE researchers need to be open to new ways of understanding and thinking. To this end, critical reviews offer an invaluable tool for interrogating the boundaries of our approaches and knowledge and for generating novel insights that can yield creative solutions with the potential to shift both research directions and practices.
Keywords
critical reviews, education, health, research