Determining differences in user performance between expert and novice primary care doctors when using an electronic health record (EHR)

This is a review of the article by Clarke et al. (2014) titled Determining differences in user performance between expert and novice primary care doctors when using an electronic health record (EHR).[1]

Background

The National Center for Health Statistics reported that 78% of office-based doctors had adopted electronic health record (EHR) systems as of 2013. While EHRs provide many benefits, such as complete, up-to-date patient information and alerts to critical lab values, there are also potential disadvantages in the form of increased physician time and lost productivity, related to EHR usability issues.

Many primary health care doctors do not receive adequate training in the use of EHRs in medical school and face a steep learning curve when using EHRs. This, along with busy schedules and poor EHR usability, may not only have a negative impact on their learning experience but also result in increased cognitive load and medical errors, as well as a decrease in the quality of patient care.

Usability is defined here as “how well a system can be operated by users to complete a certain task with effectiveness, efficiency and satisfaction” [1].

The authors of this study aimed to compare expert and novice primary care doctors using a leading EHR in order to determine:

  • their respective performance measures – in terms of percentage task success, time on task (TOT), mouse clicks (MC) and mouse movements (MM)
  • whether the performance of the expert and novice groups was correlated across tasks
  • how the expert and novice doctors rated the usability of the EHR system

Methods

Ten novice doctors and seven expert doctors participated in the study, which took place at the University of Missouri Health System (UMHS), a 536-bed tertiary care academic hospital in Columbia, Missouri. The participants were from family medicine and internal medicine.

Participants

First-year residents were classified as novice EHR users, while second-year residents and above, who had around a year or more of EHR experience, were classified as expert users.

Performance Measures

As part of the usability testing, during which data were collected using the video analysis software Morae from TechSmith, participants were required to complete a series of 19 tasks. User performance was evaluated on four measures (a computational sketch follows the list):

  1. Percent task success – the percentage of subtasks that were successfully completed
  2. TOT (time on task) – the length of time taken to complete a task
  3. MC (mouse clicks) – the number of times the user clicked the mouse while completing a task
  4. MM (mouse movement) – the distance, in pixels, of the navigation path traced by the mouse during the execution of a task
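
To make the four measures concrete, here is a minimal Python sketch of how they could be computed from a per-task event log. The log format, the field names and the straight-line-segment MM calculation are illustrative assumptions rather than details from the paper; the study itself collected and analyzed these measures with the Morae software.

 from math import hypot

 # Hypothetical event log for one task: timestamps in seconds,
 # cursor positions in pixels, and a click flag per event.
 events = [
     {"t": 0.0,  "x": 100, "y": 200, "click": False},
     {"t": 2.5,  "x": 340, "y": 210, "click": True},
     {"t": 6.1,  "x": 355, "y": 480, "click": True},
     {"t": 11.8, "x": 620, "y": 470, "click": True},
 ]
 subtask_success = [True, True, False]  # the task's subtasks, scored pass/fail

 # Percent task success: share of subtasks completed successfully.
 pct_success = 100 * sum(subtask_success) / len(subtask_success)

 # TOT: elapsed time from the first to the last logged event.
 tot = events[-1]["t"] - events[0]["t"]

 # MC: total number of mouse clicks during the task.
 mc = sum(e["click"] for e in events)

 # MM: length of the mouse navigation path in pixels, summed over
 # consecutive cursor positions (straight-line segments).
 mm = sum(hypot(b["x"] - a["x"], b["y"] - a["y"])
          for a, b in zip(events, events[1:]))

 print(f"success={pct_success:.0f}%  TOT={tot:.1f}s  MC={mc}  MM={mm:.0f}px")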

System Usability Scale (SUS)

Each participant was required to complete a usability assessment of the system in the form of the SUS, a 10-item Likert-scale questionnaire providing a subjective assessment of the EHR.
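
The paper does not reproduce the scoring procedure, but the standard SUS scoring rule is well established: each of the 10 items is rated 1-5, odd-numbered items contribute (rating - 1), even-numbered items contribute (5 - rating), and the sum is multiplied by 2.5 to give a 0-100 score. A small sketch, with hypothetical responses:

 def sus_score(responses):
     """Score a 10-item SUS questionnaire (each item rated 1-5).
     Odd items contribute (rating - 1), even items (5 - rating);
     the sum is scaled by 2.5 onto a 0-100 scale."""
     assert len(responses) == 10
     total = sum((r - 1) if i % 2 == 1 else (5 - r)
                 for i, r in enumerate(responses, start=1))
     return total * 2.5

 # Hypothetical responses from one participant (items 1-10).
 print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 3]))  # -> 77.5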

Data collection and analysis

Each usability test took approximately 20 minutes and used the think-aloud strategy [2], in which the participant was instructed to read out the printed instructions containing the test scenario and the 19 tasks. The tests were recorded using the Morae software.

After completing the tasks, the participants were asked to complete the SUS and a demographic survey. As part of the data analysis, the recorded sessions were reviewed and the 19 tasks were divided into subtasks to determine the task success rate and highlight any usability challenges.

Pearson correlation analysis was used to compare the performance measures and SUS scores and to identify any potential correlations.
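
As an illustration of this kind of analysis, the sketch below computes Pearson's r between time on task and task success rate with SciPy. The data points are invented for demonstration and are not the study's data.

 from scipy.stats import pearsonr

 # Invented per-participant values: time on task (seconds) and
 # task success rate (%). Not the study's data.
 tot = [45.2, 61.0, 38.5, 72.3, 55.1, 80.4, 49.9, 66.7, 58.2, 71.5]
 success = [95.0, 80.0, 100.0, 60.0, 85.0, 55.0, 90.0, 70.0, 82.0, 65.0]

 r, p = pearsonr(tot, success)
 # A negative r here mirrors the inverse relationship the study reports.
 print(f"r = {r:.2f}, p = {p:.3f}")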

Results

The demographic survey provided data on each participant’s age, race, gender, length of experience with EHRs and area of specialty. The average age of the novice group was 28 years, while that of the expert group was 31 years. The breakdown of results for the other criteria is reported, but because the sample size was small, these numbers are unlikely to be statistically meaningful.

  • The results for the performance measures for the 19 tasks showed:
  1. Percentage task success rate – there was no significant difference between the novice and expert doctor groups. Both groups had the lowest success rate on task 7 (adding a medication to a favorites list).
  2. TOT – there was no significant difference in the mean time on task between the two groups.
  3. MC – while the expert doctors completed the tasks with slightly fewer mouse clicks than the novice doctors, the difference was not statistically significant.
  4. MM – again, while the expert group showed slightly shorter mouse movements than the novice group, the difference was not statistically significant.
  • Correlation analysis for both groups showed that MC and MM increased with the time spent completing a task, and that higher values were associated with a decrease in the task success rate.
  • The SUS results showed no significant difference between the novice and expert doctor groups in the usability rating given to the EHR.
  • The Pearson correlation coefficient analysis indicated that there was no correlation between the task success rate and the participants' perception of the system's user friendliness.

Conclusion

The results indicate that while expert doctors may have longer experience with an EHR system, this may not increase their proficiency with it. The SUS results indicated that the length of experience with the EHR did not affect its acceptance by novice or expert doctors.

In addition, the inverse relationship between increased MC and MM and decreased task success rate indicates that more time spent did not guarantee greater task completion success. This may reflect poor usability in the EHR for certain tasks, e.g. task 16, where doctors were not clear which option to select in the system menu to change a medication.

The overall results indicate that there was no difference in task performance or usability assessment between novice and expert doctors using the EHR.

The authors made some comparisons with other usability studies that had reported similar findings on usability assessment and proficiency among nurses and other health care provider groups.

The overall conclusion was that the results of this study might assist EHR vendors in improving their user interface designs, and could inform improvements to EHR training programs for doctors, with the goal of improving doctors’ performance when using EHR systems.

Comments

The results of this study were surprising, in that there was no difference in performance or user satisfaction between the two groups, despite the expert users having a year or more of experience using the system.

The sample size was small, something that the authors acknowledge, but they state that a review of the literature indicates a minimum sample size of 10 is sufficient for an exploratory usability study to identify relevant usability problems. [3]

This paper indicates that user interface design and EHR training are important in improving user performance and usability for EHRs.

Related Articles

References

  1. Clarke, M. A., Belden, J. L., & Kim, M. S. (2014). Determining differences in user performance between expert and novice primary care doctors when using an electronic health record (EHR). Journal of Evaluation in Clinical Practice, 20(6), 1153-1161. doi: 10.1111/jep.12277. Retrieved from: http://www.ncbi.nlm.nih.gov/pubmed/?term=Determining+differences+in+user+performance+between+expert+and+novice+primary+care+doctors+when+using+an+electronic+health+record
  2. Van Someren, M. W., Barnard, Y. F., & Sandberg, J. A. (1994). The Think Aloud Method: A Practical Guide to Modelling Cognitive Processes. London: Academic Press.
  3. Barnum, C. (2003). The magic number 5: is it enough for web testing? Information Design Journal, 11, 160–170.