Are three methods better than one?

From Clinfowiki

This is a review of the article by Walji et al. (2014), which studied the effectiveness of three methods for evaluating usability in EHRs.

Are three methods better than one? A comparative assessment of usability evaluation methods in an EHR

Muhammad F. Walji, Elsbeth Kalenderian, Mark Piotrowski, Duong Tran, Krishna K. Kookal, Oluwabunmi Tokede, Joel M. White, Ram Vaderhobli, Rachel Ramoni, Paul C. Stark, Nicole S. Kimmes, Maxim Lagerweij, Vimla L. Patel

International Journal of Medical Informatics Volume 83, Issue 5, May 2014, Pages 361–367


Usability is often cited as a major barrier to the wider adoption of EHRs. Poor usability has been shown to reduce efficiency, decrease physician satisfaction, and potentially compromise patient safety. [1]

Selecting an appropriate method, or combination of methods, for effectively evaluating EHRs can be a challenge. The authors evaluated three different methods for their effectiveness in detecting usability issues in EHRs.


The study was conducted with a mixture of third- and fourth-year students, residents, and faculty from two dental schools: Harvard School of Dental Medicine (HSDM) and the University of California San Francisco (UCSF). These schools were already using the dental EHR axiUm and the EZCodes dental terminology standard. [2] Participants from these schools conducted usability evaluations of the EHR using three methods: user testing, interviews, and a survey.

User Testing

Over a three-day site visit, 32 end users from the dental schools performed a series of tasks that had been developed in collaboration with dentists and researchers from the University of Texas Health Science Center (UTH). Participants were asked to think aloud while conducting the tasks, and Hierarchical Task Analysis (HTA) and the Keystroke Level Model (KLM) were used to analyze the path and time of the tasks.
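The Keystroke Level Model mentioned above predicts expert task completion times by summing per-operator time constants. A minimal, illustrative sketch follows; the operator values are the commonly cited KLM defaults, not figures calibrated to this study, and the example operator sequence is hypothetical:

```python
# Keystroke-Level Model (KLM) sketch: estimate a task's completion time
# by summing standard operator times (commonly cited defaults, seconds).
# Real studies often calibrate these values to their own users.
OPERATOR_TIMES = {
    "K": 0.28,  # keystroke (average typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
    "B": 0.10,  # mouse button press or release
}

def klm_estimate(sequence: str) -> float:
    """Sum operator times for a sequence like 'MHPBK'."""
    return round(sum(OPERATOR_TIMES[op] for op in sequence), 2)

# Hypothetical EHR micro-task: think (M), move hand to mouse (H),
# point at a diagnosis field (P), click (B), type one character (K).
print(klm_estimate("MHPBK"))  # 1.35 + 0.40 + 1.10 + 0.10 + 0.28 = 3.23
```

Comparing such model-predicted times against observed times is one way an analyst can flag steps where the interface slows users down.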

Semi-structured Interview

Researchers conducted 30-minute interviews with 36 participants to capture feedback about EHR use with regard to the EZCodes terminology, workflows, and the interface. The data were analyzed using MS Excel.

Survey Questionnaire

After the site visit, 35 participants were asked to complete questionnaires comprising 29 statements and four open-ended questions. The open-ended questions concerned the usability of the EHR's diagnosis and terminology functionality.


The data from all three methods were analyzed, and the problems were categorized into three themes: EHR user interface, diagnostic terminology, and clinical work domain/workflow.

The degree of overlap among the problem themes identified was also mapped across all three methods.


A total of 187 problems were detected across the three methods: 54% by user testing, 28% by user interviews, and 18% by the survey.

Across the problem themes of user interface, terminology, and workflow, user testing identified 100%, 80%, and 67% of the problems, respectively. User interviews identified 80%, 60%, and 33%, while the survey questionnaire identified 30%, 40%, and 22%.

The overlap analysis found five themes common to all three methods, with 12 themes overlapping between user testing and interviews. User testing and the survey shared six overlapping themes, while the survey and interviews shared five.
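The pairwise overlap analysis described above amounts to set intersection over the themes each method surfaced. A minimal sketch in Python, using hypothetical theme labels rather than the paper's actual theme lists:

```python
from itertools import combinations

# Hypothetical theme sets per evaluation method (illustrative only;
# not the actual themes reported by Walji et al. 2014).
themes = {
    "user_testing": {"navigation", "terminology_gaps", "workflow_order", "alerts", "search"},
    "interviews":   {"navigation", "terminology_gaps", "training", "alerts"},
    "survey":       {"navigation", "data_entry", "alerts"},
}

# Themes every method surfaced (the "common to all" group).
common_to_all = set.intersection(*themes.values())

# Overlap between each pair of methods.
pairwise = {
    (a, b): themes[a] & themes[b]
    for a, b in combinations(themes, 2)
}

print("common to all:", sorted(common_to_all))
for pair, overlap in pairwise.items():
    print(pair, "->", sorted(overlap))
```

Counting the sizes of these intersections is all that is needed to reproduce the kind of overlap figures the study reports.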


The results showed that user testing with think-aloud was the most effective method for identifying problems, followed by interviews, and then the survey questionnaire.

In addition, the overlap analysis showed that while each method was able to identify usability problems in the EHR, a combination of methods detects more issues than any single method alone.


The study was an interesting look at the effectiveness of methods for evaluating EHRs, and because the methods are generic, they could be applied to other health information technologies beyond EHRs, such as Clinical Decision Support Systems (CDSS). The methods were easy to understand, and the results were easy to interpret thanks to good tables, graphs, and diagrams. The sample group was not randomized, which the authors acknowledge, but the groups were relatively evenly sized across the methods. A key element of a successful EHR implementation is the ability to evaluate the system before, during, and after deployment so that users' concerns are addressed; involving end users in this way is essential to ensuring usability.

References


  1. Patel et al. (2008). Translational cognition for decision support in critical care environments: a review.
  2. Kalenderian et al. (2011). The development of a dental diagnostic terminology.