Using qualitative studies to improve the usability of an EMR

This is a review of Alan F. Rose, Jeffrey L. Schnipper, Elyse R. Park, Eric G. Poon, Qi Li, and Blackford Middleton's use of qualitative studies in the assessment and improvement of EMR usability. [1]

Research question

Are studies based on qualitative data efficacious in helping improve the usability of EMRs, and if so, what design solutions can be recommended?


Methods

Environment

Two separate qualitative studies were conducted to identify user task flows within an existing EMR, to better understand the environment in which these tasks are performed, and to determine how overall usability could be improved.

Design

Each of the qualitative studies focused on users of the Longitudinal Medical Record (LMR), a web-based application that facilitates the management of patient information, provides clinical messaging, and standardizes methods of data entry and retrieval. The following three forms of evaluation were then applied:

Task Analysis

Task analysis clarifies the objectives of each task, which tasks are most important to users, and which tasks depend on other tasks.
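To make this concrete, the following is a minimal, hypothetical sketch (not drawn from the reviewed studies) of how task-analysis findings might be recorded: each task carries an objective, a user-assigned importance rating, and the tasks it depends on, so priorities and dependencies can be reviewed together. The task names and ratings are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    objective: str
    importance: int                      # e.g. 1 (low) to 5 (high), from user ratings
    depends_on: list[str] = field(default_factory=list)

# Invented example tasks, loosely in the spirit of results management.
tasks = [
    Task("review_results", "See new lab results for a patient", 5),
    Task("acknowledge_result", "Mark a result as reviewed", 4, ["review_results"]),
    Task("send_letter", "Notify the patient of the result", 3, ["acknowledge_result"]),
]

# List tasks by importance and show what each one depends on.
for t in sorted(tasks, key=lambda t: t.importance, reverse=True):
    print(f"{t.name} (importance {t.importance}) depends on {t.depends_on or 'nothing'}")
```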

Focus Groups

Focus groups are an informal and relatively unstructured exercise that can help assess user needs and feelings both before and after system design.

Collaboration

Task analysis and focus group studies were conducted independently of each other, with an agreement between their respective administrators to collaborate and identify common themes during the data analysis phase. This resulted in a joint effort to systematically compare the results of the two inspections and propose solutions for enhancing the LMR's usability.

Measurements

Each of the qualitative studies focused on users of the Longitudinal Medical Record, a web-based application that facilitates the management of patient information, provides clinical messaging, and standardizes methods of data entry and retrieval.

Results

Findings from both studies raised issues with the amount and organization of information in the display, interference with workflow patterns of primary care physicians, and the availability of visual cues and feedback. These findings were then used to recommend user interface design changes.

Main results

The two qualitative studies identified largely consistent usability issues with the LMR. Deficiencies were identified specifically in the following areas:

Navigation

Both studies described navigation as "awkward," with users reporting that entering or retrieving data took too many clicks. Too many popup menus were offered, which crowded the screen. Physicians created workarounds by opening multiple browser windows, which was not ideal: it was time-consuming and consumed system resources, slowing down the computer.

Information Design

The presentation of the screens created issues. Results Manager's use of color and the low contrast of data objects in their "selected" state made it difficult to read or identify information quickly. In addition, there was a poor balance between displaying what the provider needed and what was available "one click away" from the current screen.
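As a rough illustration of the low-contrast problem, the WCAG 2.x contrast-ratio formula can quantify how hard a "selected" row is to read against its background; WCAG AA recommends a ratio of at least 4.5:1 for normal text. The colors below are hypothetical and are not taken from the LMR.

```python
def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to its linear-light value (WCAG 2.x)."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Hypothetical colors: mid-grey text on a pale yellow "selected" row reads poorly.
print(contrast_ratio((150, 150, 150), (255, 255, 200)))  # about 2.9, fails the 4.5:1 AA threshold
print(contrast_ratio((0, 0, 0), (255, 255, 255)))        # 21.0, maximum contrast
```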

Customization

Comments regarding customization were targeted primarily at the letter-writing feature in Results Manager. Many physicians used their own letters and found the pre-defined letter templates of Results Manager inadequate for their workflow needs.

Workflow

The participants came from a variety of workflow backgrounds. Some blocked off time at the end of the day to enter notes, while others entered them at the end of each patient visit. The biggest workflow complaint again stemmed from navigational issues, specifically the popup menus, which slowed productivity.


Conclusion

Examination of the two studies shows that qualitative research can help focus attention on user tasks and goals and identify patterns of care. Findings from both studies were consistent regarding issues with the organization of information in the display, interference with the workflow patterns of primary care physicians, and the availability of visual cues.


Commentary

In contemplating this study, it is important to take particular note of a key limitation: because the attribute sets and classification models were developed specifically for the researchers' institution, the specifics of the study will probably not generalize to other institutions. Recognizing that the outcomes are likely to differ on an institution-by-institution basis, it is still possible to be guided by the learning methods and subset of attributes used in the study.

The overview of machine learning algorithms is helpful, particularly when combined with the observation that, contrary to prior studies, boosting algorithms did not significantly improve upon the performance of their non-boosted counterparts. This could be explained by the base algorithms chosen already being strong learners, rather than the weaker ones used in previous comparison studies. Overall, the researchers show strong awareness of the potential influences on the outcomes. For example, they acknowledge that avg reads, avg writes, and orders entered are attributes more likely to appear in quantity for outpatient clinicians, potentially giving a better description of users than specialty and discipline.
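The point about strong base learners can be illustrated with a small, hypothetical scikit-learn sketch (synthetic data and parameters chosen purely for illustration, not the study's models or attributes): boosting a weak base learner such as a decision stump usually yields a large gain, while boosting an already strong learner often adds little.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic two-class data, for illustration only.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Weak base learner: a decision stump. Boosting typically improves it substantially.
stump = DecisionTreeClassifier(max_depth=1, random_state=0)
boosted_stump = AdaBoostClassifier(estimator=stump, n_estimators=100, random_state=0)
# (in scikit-learn < 1.2 the keyword is base_estimator rather than estimator)

# Stronger base learner: a deeper tree. Boosting often adds much less here.
tree = DecisionTreeClassifier(max_depth=8, random_state=0)
boosted_tree = AdaBoostClassifier(estimator=tree, n_estimators=100, random_state=0)

for name, model in [("stump", stump), ("boosted stump", boosted_stump),
                    ("tree", tree), ("boosted tree", boosted_tree)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:15s} mean accuracy = {score:.3f}")
```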

They give a nice overview of some potential usability goals under Discussion: "to present the minimal amount of information to support quick decisions, reducing unnecessary navigation steps and exposure to irrelevant information." That summary would make an excellent goals statement for many infobutton or help-button projects. The data transformation to Parent level 3, or main drug ingredient, was extremely well done.

The one slightly questionable area is data cleaning, where sessions of more than four topics were categorized as likely to have been for "testing, demonstration, or training purposes" rather than confusion, misdirection, or explorative hunting. It is definitely an article worth reading for anyone looking to improve information retrieval via predictive classification models, as the methods are interesting even if the results are likely to differ from institution to institution.

References

  1. Rose AF, Schnipper JL, Park ER, Poon EG, Li Q, Middleton B. Using qualitative studies to improve the usability of an EMR. J Biomed Inform. 2005 Feb;38(1):51-60. http://dx.doi.org/10.1016/j.jbi.2004.11.006