Formative Evaluation of the Accuracy of a Clinical Decision Support System for Cervical Cancer Screening

From Clinfowiki
Revision as of 16:29, 15 March 2015

This is a review of Wagholikar et al.'s 2013 article, Formative Evaluation of the Accuracy of a Clinical Decision Support System for Cervical Cancer Screening.[1]


Background

The authors of this article discuss the pre-launch testing they performed on a clinical decision support (CDS) system for cervical cancer screening. Rather than piloting the CDS in select patient care settings, they performed a formative evaluation in which the interventions the CDS would recommend were compared side by side with what clinicians would recommend without the aid of the CDS.


Methods

Clinicians who are target users of the cervical cancer screening CDS were asked to evaluate test cases and select their recommendation from guideline-based options provided in the web interface. A total of 25 clinicians each evaluated 7 unique test cases. If a clinician's recommendation differed from what the CDS would have suggested, 2 clinicians who were not involved in the CDS development then reviewed the test case.

Results

The clinicians and the CDS matched in their recommendations in 94 cases. There were 75 mismatched recommendations, 22 of which were noted to be CDS failures due to modeling and programming errors. The remaining mismatches were provider errors in recommending the best guideline-based order, which according to the authors would be reduced with the use of the CDS.


Conclusion

The authors concluded that the formative evaluation allowed them to pinpoint failures in the CDS before it was launched. They believe this method is better than pilot testing because it let them test not only common patient scenarios but also less frequent yet clinically important cases.


Comments

I think the formative evaluation the authors used is highly effective. Aside from identifying CDS failures, it also enabled them to obtain baseline data on how compliant clinicians are with guideline-based screening. However, I believe that pilot testing should still be done in conjunction with this evaluation to see how the CDS performs - and how clinicians interact with it - in an actual patient care environment.

References

1. Wagholikar, K. B., MacLaughlin, K. L., Kastner, T. M., Casey, P. M., Henry, M., Greenes, R. A., . . . Chaudhry, R. (2013). Formative evaluation of the accuracy of a clinical decision support system for cervical cancer screening. Journal of the American Medical Informatics Association, 20(4), 749-757. doi: 10.1136/amiajnl-2013-001613. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3721177/