This is a review of the article by Clarke et al. (2014) titled Determining differences in user performance between expert and novice primary care doctors when using an electronic health record (EHR).
'''Determining differences in user performance between expert and novice primary care doctors when using an electronic health record (EHR)'''
Clarke MA, Belden JL, Kim MS.
J Eval Clin Pract. 2014 Dec;20(6):1153-61. doi: 10.1111/jep.12277. Epub 2014 Dec 2.
http://www.ncbi.nlm.nih.gov/pubmed/?term=Determining+differences+in+user+performance+between+expert+and+novice+primary+care+doctors+when+using+an+electronic+health+record
== Background ==
The National Center for Health Statistics reported that 78% of office-based doctors had adopted electronic health record (EHR) systems by 2013. While EHRs provide many benefits, such as complete, up-to-date patient information and alerts to critical lab values, they also have potential disadvantages, including increased demands on doctors’ time and a loss of productivity related to usability issues.

Many primary care doctors do not receive adequate training in the use of EHRs in medical school and face a steep learning curve when using them. This, along with busy schedules and poor EHR usability, may not only have a negative impact on their learning experience but also result in increased cognitive load and medical errors, as well as a decrease in the quality of patient care.

Usability is defined here as “how well a system can be operated by users to complete a certain task with effectiveness, efficiency and satisfaction”.
The authors of this study aimed to compare expert and novice primary care doctors using a leading EHR in order to investigate:

1. their respective performance measures – in terms of percentage task success, time on task (TOT), mouse clicks (MC) and mouse movements (MM)

2. whether there was any correlation among these performance measures for the expert and novice groups

3. how the expert and novice doctors rated the usability of the EHR system
== Methods ==
Ten novice doctors and seven expert doctors participated in the study, which took place at the University of Missouri Health System (UMHS), a 536-bed tertiary care academic medical hospital in Columbia, Missouri. The participants were from family medicine and internal medicine.
=== Participants ===
First-year residents were classified as novice EHR users, while second-year residents and above were classified as expert users; the expert group had an average age of 31 years.
=== Performance Measures ===
As part of the usability testing, which involved data collection via the video analysis software Morae from TechSmith, participants were required to complete a series of 19 tasks. User performance was evaluated on:

1. Percent task success – the percentage of subtasks that were successfully completed

2. TOT – time on task, the length of time taken to complete a task

3. MC – mouse clicks, the number of times the user clicked the mouse while completing a task

4. MM – mouse movement, the distance, in pixels, of the navigation path of the mouse during the execution of a task
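The four measures above can be illustrated with a minimal sketch. The paper derived them from Morae recordings; the event-log format below (field names `t`, `type`, `x`, `y`) is an assumption for illustration, not the software's actual output.

```python
# Illustrative sketch (not from the paper): computing the four performance
# measures from a hypothetical per-task event log. Field names are assumptions.
import math

def task_metrics(events, subtask_results):
    """events: dicts with 't' (seconds), 'type' ('move' or 'click'), and
    'x', 'y' screen coordinates; subtask_results: list of success booleans."""
    # Percent task success: share of subtasks completed successfully
    success = 100.0 * sum(subtask_results) / len(subtask_results)
    # TOT: elapsed time from first to last recorded event
    tot = events[-1]["t"] - events[0]["t"]
    # MC: count of click events during the task
    mc = sum(1 for e in events if e["type"] == "click")
    # MM: total Euclidean path length of the cursor, in pixels
    mm = sum(math.dist((a["x"], a["y"]), (b["x"], b["y"]))
             for a, b in zip(events, events[1:]))
    return {"success_pct": success, "tot_s": tot, "mc": mc, "mm_px": mm}

log = [
    {"t": 0.0, "type": "move", "x": 0, "y": 0},
    {"t": 1.5, "type": "click", "x": 30, "y": 40},
    {"t": 4.0, "type": "move", "x": 30, "y": 160},
]
print(task_metrics(log, [True, True, False]))
```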
=== System Usability Scale (SUS) ===

Each participant was required to complete a usability assessment of the system in the form of a 10-item Likert-scale questionnaire, a subjective assessment of the EHR.
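The paper does not detail its SUS arithmetic, but the standard scoring of the 10-item questionnaire (Brooke, 1996) works as sketched below: odd-numbered items contribute (rating − 1), even-numbered items contribute (5 − rating), and the sum is scaled by 2.5 to a 0–100 score.

```python
# Standard SUS scoring: odd-numbered items are positively worded, even-numbered
# items negatively worded, so their contributions are computed differently.
def sus_score(responses):
    """responses: 10 Likert ratings, 1 (strongly disagree) to 5 (strongly agree)."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    # index i is even for odd-numbered items (item 1, 3, 5, ...)
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5  # scale the 0-40 raw sum to 0-100

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

A score of 100 requires "strongly agree" on every positive item and "strongly disagree" on every negative item.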
=== Data collection and analysis ===
The usability tests took approximately 20 minutes each and used the think-aloud strategy, in which the participant was instructed to read aloud the printed instructions containing the test scenario and 19 tasks. The tests were recorded using the Morae software.
After completion of the tasks, the participants were asked to complete the SUS and a demographic survey. As part of the data analysis, the recorded sessions were reviewed and the 19 tasks divided into subtasks to determine task success rates and highlight any usability challenges.

Pearson's correlation test was used to compare the performance measures and SUS scores to identify any potential correlations (Van Someren, M. W., Barnard, Y. F. & Sandberg, J. A. (1994) The Think Aloud Method: A Practical Guide to Modelling Cognitive Processes. London: Academic Press).
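The Pearson coefficient used here measures the strength of a linear relationship between two measures. A minimal sketch, using made-up numbers (the paper's data are not reproduced here):

```python
# Illustrative sketch: Pearson's r between two performance measures,
# e.g. time on task vs. percent task success. The data below are invented.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # covariance numerator and the two standard-deviation denominators
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

tot = [30, 45, 60, 90, 120]       # hypothetical time on task (seconds)
success = [95, 90, 80, 60, 40]    # hypothetical percent task success
print(round(pearson_r(tot, success), 3))
```

An r near −1 here would mirror the paper's finding that longer tasks tended to have lower success rates.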
== Results ==
The demographic survey provided data on each participant’s age, race, gender, length of experience with EHRs and specialty. The average age of the novice group was 28 years, while that of the expert group was 31 years. A breakdown of the results for the other criteria is outlined, but as the sample size was small, these numbers are not likely to be statistically meaningful (Barnum, C. (2003) The magic number 5: is it enough for web testing? Information Design Journal, 11, 160–170).

The results for the performance measures for the 19 tasks showed:
Percentage task success rate – there was no significant difference between the novice and expert doctor groups. Both groups had the lowest success rate on task 7, adding a medication to a favorites list.

TOT – there was no significant difference between the mean values for the two doctor groups.

MC – while expert doctors completed the tasks with slightly fewer MC than novice doctors, this difference was not statistically significant.

MM – again, while the expert group showed slightly shorter MM than the novice group, this was not statistically significant.
Correlation analysis for both groups showed that MC and MM increased with the time spent completing a task, and that this increase was associated with a decrease in task success rate.

The SUS results showed no significant difference between the usability ratings given to the EHR by the novice and expert doctor groups.

The Pearson correlation coefficient analysis indicated that there was no correlation between the task success rate and the participants’ perception of the user-friendliness of the system.
== Conclusion ==
The results indicate that while expert doctors may have longer experience with an EHR system, this may not increase their proficiency with the system. The SUS results indicated that length of EHR experience did not affect acceptance of the EHR by novice or expert doctors.

In addition, the inverse relationship between increased MC and MM and decreased task success rate indicates that more time spent did not guarantee successful task completion, which may reflect poor usability of the EHR for some tasks; for example, in task 16, doctors were not clear which option to select in the system menu for changing a medication.
The overall results indicate that there was no difference in task performance or usability assessment between novice and seasoned doctors using the EHR.

The authors made some comparisons with other usability studies that reported similar findings on usability assessment and proficiency with participants from nursing and other health care provider groups.

The overall conclusion was that the results of this study might help EHR vendors improve their user interface designs and could be considered when improving EHR training programs for doctors, to ensure improvements in doctors’ performance when using EHR systems.
== Comments ==
The results of this study were surprising in that there was no difference in performance or user satisfaction between the two groups, despite the expert users having a year or more of experience with the system.

The sample size was small, something the authors acknowledge, although they cite literature indicating that a minimum sample size of 10 is sufficient for an exploratory usability study to identify relevant usability problems.
== Related Articles ==
== References ==
