Representing narrative evidence as clinical evidence logic statements

JAMIA Open (2022)

Abstract
Lay Summary
Representing clinical evidence for clinical decision support (CDS) requires the evidence to be in a standard, shareable format. Clinical Evidence Logic Statements (CELS) are shareable and come in an "If-Then" format. Representing evidence as CELS can be a difficult task, and incomplete or incorrect representations may not reflect the true intent of the evidence. In this study, we assessed factors that may facilitate CELS representation, including training sessions, evidence structure, and educational level of CELS authors. First, we asked researchers to represent CELS from narrative evidence. They then completed training sessions, after which they were asked to represent more CELS. We compared the accuracy of CELS representation (1) pre- and post-intervention, (2) between structured and semi-structured evidence, and (3) between CELS authored by specialty-trained versus non-specialty-trained researchers. Accuracy of CELS representation significantly increased post-intervention. CELS represented from structured evidence had higher representation accuracy than those from semi-structured evidence. Similarly, specialty-trained authors had higher accuracy when representing structured evidence. Training sessions with explicit authoring instructions significantly improved CELS representation, although the task remains difficult, with many CELS inaccurately represented compared with those authored by knowledge representation experts experienced in representing evidence for CDS.

Objective
Clinical evidence logic statements (CELS) are shareable knowledge artifacts in a semistructured "If-Then" format that can be used for clinical decision support systems. This project aimed to assess factors facilitating CELS representation.

Materials and Methods
We described CELS representation of clinical evidence and assessed factors that facilitate representation, including authoring instruction, evidence structure, and educational level of CELS authors. Five researchers were tasked with representing CELS from published evidence. Represented CELS were compared with the formal representation. After an authoring-instruction intervention, the same researchers were asked to represent the same CELS, and accuracy was compared with that preintervention using McNemar's test. CELS representation accuracy was also compared between structured and semistructured evidence, and between CELS authored by specialty-trained versus nonspecialty-trained researchers, using χ² analysis.

Results
A total of 261 CELS were represented from 10 different pieces of published evidence by the researchers pre- and postintervention. CELS representation accuracy significantly increased postintervention, from 20/261 (8%) to 63/261 (24%; P < .00001). More CELS were assigned for representation postintervention, with 379 total CELS subsequently included in the analysis (278 structured and 101 semistructured). Representing CELS from structured evidence was associated with significantly higher representation accuracy (P = .002), as was representation by specialty-trained authors (P = .0004).

Discussion
CELS represented from structured evidence had higher representation accuracy than those from semistructured evidence. Similarly, specialty-trained authors had higher accuracy when representing structured evidence.

Conclusion
Authoring instructions significantly improved CELS representation, with a 3-fold increase in accuracy. However, CELS representation remains a challenging task.
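To make the "If-Then" shape of a CELS concrete, the following is a minimal illustrative sketch in Python. The class name, field names, and the example rule are hypothetical, not the formal CELS schema used in the study; the paper does not specify a machine-readable encoding here.

```python
from dataclasses import dataclass

# Hypothetical sketch of a CELS-style "If-Then" statement.
# Field names and the example rule below are illustrative only,
# not the formal CELS representation evaluated in the study.
@dataclass(frozen=True)
class CELS:
    condition: str  # the "If" clause (patient state or trigger)
    action: str     # the "Then" clause (recommended CDS action)

    def render(self) -> str:
        """Serialize the statement in the semistructured If-Then format."""
        return f"IF {self.condition} THEN {self.action}"

# A made-up example rule in the If-Then shape described in the abstract.
rule = CELS(
    condition="patient age >= 65 AND creatinine clearance < 30 mL/min",
    action="display renal dose-adjustment alert",
)
print(rule.render())
```

The frozen dataclass keeps each statement immutable once authored, which mirrors the idea of CELS as fixed, shareable knowledge artifacts that can be compared against a formal reference representation.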
Key words
health information technology, clinical decision support, knowledge representation