Determining differences in user performance between expert and novice primary care doctors when using an electronic health record (EHR)

From Clinfowiki
This is a review of the article by Clarke et al. (2014) titled "Determining differences in user performance between expert and novice primary care doctors when using an electronic health record (EHR)".

Results

The demographic survey provided data on each participant’s age, race, gender, length of experience with EHRs and area of specialty. The average age of the novice group was 28 years, while that of the expert group was 31 years. The breakdown of the results for the other criteria is outlined in the paper, but because the sample size was small these numbers are unlikely to be statistically significant [1].

The results for the performance measures across the 19 tasks showed the following:

Percentage task success rate – there was no significant difference between the novice and expert doctor groups. Both groups had the lowest success rate on task 7, adding a medication to a favorites list.

TOT (time on task) – there was no significant difference between the mean values for the two doctor groups.

MC (mouse clicks) – while expert doctors completed the tasks with slightly fewer mouse clicks than novice doctors, this difference was not statistically significant.

MM (mouse movements) – again, while the expert group showed slightly shorter mouse movements than the novice group, this was not statistically significant.

Correlation analysis for both groups showed that MC and MM increased as more time was spent completing a task, and that this was associated with a decrease in the task success rate.

The System Usability Scale (SUS) results showed no significant difference between the usability ratings that the novice and expert doctor groups gave the EHR.

The Pearson correlation coefficient analysis indicated that there was no correlation between the task success rate and the participants’ perception of the user-friendliness of the system.
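As a rough illustration of the kind of analysis described above, the following is a minimal sketch (not the authors’ actual code) of computing a Pearson correlation coefficient between per-participant task success rates and SUS scores in Python. The variable names and figures are hypothetical placeholders, not data from the study.

# Minimal sketch of a Pearson correlation check between task success rate
# and perceived usability (SUS score). The data below are hypothetical
# placeholders, not values from Clarke et al. (2014).
from scipy import stats

# One entry per participant: percentage of the 19 tasks completed successfully.
task_success_rate = [84.2, 78.9, 94.7, 89.5, 73.7, 84.2, 89.5, 78.9, 94.7, 84.2]

# Corresponding SUS scores (0-100) from the post-session questionnaire.
sus_score = [72.5, 80.0, 67.5, 75.0, 70.0, 77.5, 65.0, 82.5, 70.0, 75.0]

r, p_value = stats.pearsonr(task_success_rate, sus_score)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
# A p-value above the chosen significance level (e.g. 0.05) would be consistent
# with the paper's finding of no correlation between success rate and perceived usability.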

Conclusion

The results indicate that although expert doctors have longer experience with an EHR system, this may not increase their proficiency with it. The SUS results indicated that length of EHR experience did not affect acceptance of the EHR by either novice or expert doctors.

In addition, the inverse relationship between increased MC and MM and decreased task success rate indicates that spending more time did not guarantee successful task completion. This may reflect poor usability in the EHR for some tasks, e.g. task 16, where doctors were unclear which option to select in the system menu to change a medication.

The overall results indicate that there was no difference in task performance or usability assessment between novice and expert doctors using the EHR.

The authors compared their results with other usability studies that reported similar findings for usability assessment and proficiency among nurses and other health care provider groups.

The overall conclusion was that the results of this study might help EHR vendors improve their user interface design and could be considered when improving EHR training programs for doctors, with the aim of improving doctors’ performance when using EHR systems.

Comments

The results of this study were surprising in that there was no difference in performance or user satisfaction between the two groups, despite expert users having a year or more of experience using the system.

The sample size was small, something the authors acknowledge, but they cite a literature review indicating that a minimum sample size of 10 is sufficient for an exploratory usability study aimed at identifying relevant usability problems.

Related Articles

References

  1. Barnum, C. (2003) The magic number 5: is it enough for web testing? Information Design Journal, 11, 160–170.