Reducing Emergency Department Charting and Ordering Errors with a Room Number Watermark on the Electronic Medical Record Display

From Clinfowiki
Revision as of 19:02, 11 February 2015

This is a review of Loren G Yamamoto's Reducing Emergency Department Charting and Ordering Errors with a Room Number Watermark on the Electronic Medical Record Display. [1]

Research question

A survey of Emergency Department (ED) clinicians was conducted to assess the frequency of errors in charting and order entry made on the wrong patient's chart in the electronic medical record (EMR), and clinician opinion was sought on whether a simple watermark of the patient's room number might help reduce the number of these EMR "wrong patient" errors. Would a room number watermark be an effective strategy to reduce wrong-patient errors in charting and ordering?

Methods

Environment

Emergency Department clinicians using an electronic medical record.

Design

During calendar year 2012, attending general emergency physicians, attending pediatric emergency physicians, ED nurses, and ED clinical assistants were asked in person to participate in a voluntary survey as study subjects. Participant responses were collected in person by the study investigator after verbal consent was obtained. The survey recorded the number of years of clinical experience of the study subjects. Nurses and clinical assistants were asked to approximate the number of hours worked during the previous 3 months. Physicians were asked to approximate the number of patient encounters during the previous 3 months. The different responsibilities of the physicians, nurses, and clinical assistants required the protocol to assess errors within their respective scopes of responsibility. The survey asked study subjects whether they had ever made an error in which charting or order entry (physicians only) was done in the wrong patient's chart. The survey then asked study subjects for an approximate number of times this occurred in the last 3 months. Nurses were also asked whether they had noticed an ordering error (made by the physician) on the wrong patient's chart and to approximate the number of times this occurred in the last 3 months.

Measurements

Study subjects were then shown the standard EMR screen (what they normally see), then an identical EMR screen with room number watermarks added to the patient chart and tabs. In addition, a verbal description of how the two screens differed was provided. Subjects were asked if they thought that the addition of the room number watermark and the room number on the tabs could potentially reduce the number of wrong patient charting/ordering errors. If they responded yes, then they were asked whether they thought this would eliminate just a few, roughly half, or most of the errors.

Data from each study subject's survey form were manually entered into a spreadsheet (Microsoft Excel, Microsoft Corporation, Redmond, WA). Descriptive statistics were tabulated using the built-in functions of the spreadsheet.
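The tabulation step can be sketched outside a spreadsheet as well. A minimal Python example, using hypothetical survey fields and illustrative values (not data from the paper):

```python
import statistics

# Hypothetical survey rows: (years of clinical experience, self-reported
# wrong-patient errors in the last 3 months). Values are illustrative only.
survey = [(12, 2), (5, 0), (20, 3), (8, 1), (15, 2)]

errors = [e for _, e in survey]

# Descriptive statistics analogous to the spreadsheet's built-in functions
summary = {
    "n": len(errors),
    "mean": statistics.mean(errors),
    "median": statistics.median(errors),
    "min": min(errors),
    "max": max(errors),
}
print(summary)
```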

A charting error was defined as a keystroke entered into a note on the wrong patient's chart, even if the error was discovered immediately and the note was purged. An ordering error was defined as entering an order on the wrong patient, even if it was discovered immediately and the order was cancelled. Ordering error counts were defined in terms of episodes rather than the actual number of orders. For example, if a physician ordered three medications on the wrong patient at the same time, this was considered to be one error episode.
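The episode-counting rule can be illustrated with a short sketch. This is one interpretation, assuming each wrong-patient order carries a timestamp and that orders entered on the same patient at the same time collapse into a single episode:

```python
# Each tuple: (patient_id, timestamp) for an order entered on the WRONG
# patient. Illustrative data only; identifiers are hypothetical.
wrong_orders = [
    ("pt_A", "10:05"),
    ("pt_A", "10:05"),  # three medications ordered together on the
    ("pt_A", "10:05"),  # wrong patient still count as one episode
    ("pt_B", "11:30"),
]

# Deduplicate by (patient, time) to count episodes rather than orders
episodes = {(pid, ts) for pid, ts in wrong_orders}
print(len(episodes))  # 2 episodes despite 4 individual orders
```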

Results

The models' performance ranged from 0.49 to 0.56 (kappa). The AUC of the best model ranged from 0.73 to 0.99. The content topics adult dose, pediatric dose, patient education, and pregnancy category yielded the best performance for choice prediction.

Main results

The five strongest individual predictors were avg reads, orders entered, avg writes, patient age, and parent level 3. Classifiers showed average kappa scores ranging between 0.47 (rules) and 0.56 (Stacking), indicating an overall moderate level of agreement. Stacking outperformed the other two competitors in all 10 bootstrapped test sets, although there was no statistically significant difference among the Stacking, Bayesian network, and SVM classifiers. The boosted algorithms did not differ significantly from their non-boosted counterparts. The learning methods showed varied performance in predicting each individual class.
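Since the classifiers are compared on kappa, a chance-corrected agreement score, a minimal hand-rolled Cohen's kappa (a sketch for interpretation, not the paper's evaluation code) may help make the 0.47 to 0.56 range concrete:

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Chance-corrected agreement between two label sequences."""
    n = len(y_true)
    # Observed agreement: fraction of positions with matching labels
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Expected agreement under chance, from each sequence's label marginals
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    expected = sum(true_counts[c] * pred_counts[c] for c in true_counts) / (n * n)
    return (observed - expected) / (1 - expected)

# Illustrative labels: perfect agreement gives kappa = 1.0
print(cohens_kappa(["a", "b", "a", "b"], ["a", "b", "a", "b"]))  # 1.0
```

Kappa of 0 means agreement no better than chance, so the reported mid-0.5 scores sit squarely in the "moderate" band of the conventional interpretation scale.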

Conclusion

The results suggest that, while information needs are strongly affected by the characteristics of users, patients, and medications, classification models based on Infobutton usage data are a promising method for predicting the content topics a clinician would choose to answer patient care questions while using an EMR system.

Commentary

In contemplating this study, it is important to note a key limitation: because the attribute sets and classification models were developed specifically for the researchers' institution, the specifics of the study will probably not generalize to other institutions. Recognizing that the outcomes are likely to differ from institution to institution, readers can still be guided by the learning methods and the subset of attributes used in the study.

The overview of machine learning algorithms is helpful, particularly when combined with the performance note that, contrary to prior studies, boosting algorithms did not significantly improve on their non-boosted counterparts; this could be explained by the base algorithms chosen already being strong learners, rather than the weaker ones used in previous comparison studies. Overall, the researchers show strong awareness of the potential influences on the outcomes. For example, they acknowledge that avg reads, avg writes, and orders entered are attributes more likely to appear in quantity for outpatient clinicians, potentially giving a better description of users than specialty and discipline.

They give a nice overview of potential usability goals under Discussion: "to present the minimal amount of information to support quick decisions, reducing unnecessary navigation steps and exposure to irrelevant information." That summary would make an excellent goals statement for many infobutton or help-button projects. The data transformation to Parent level 3, the main drug ingredient, was extremely well done.

The one slightly questionable area is the data cleaning, where sessions of more than four topics were categorized as likely to have been for "testing, demonstration, or training purposes" rather than confusion, misdirection, or exploratory hunting. This is definitely an article worth reading for anyone looking to improve information retrieval via predictive classification models; the methods are extremely interesting, even if the results are likely to differ from institution to institution.

References

  1. Yamamoto LG. Reducing Emergency Department Charting and Ordering Errors with a Room Number Watermark on the Electronic Medical Record Display. Hawaii J Med Public Health. 2014 Oct;73(10):322-8. https://www.ncbi.nlm.nih.gov/pubmed/25337450