Interface Design of the EHR

From Clinfowiki

The interface of Electronic Health Records has major implications for the retrieval of information, the users’ workflow(s), patient safety and the acceptance of the EHR itself.


The EHR has a variety of goals--memory, computation, decision support, and collaboration--each of which is applied to care providers' various tasks within the EHR, including reviewing patient history, conducting a patient assessment, clinical decision-making, development of a treatment plan, ordering additional services, prescribing medication, and documentation of the visit.[1]

This demanding set of functionalities requires an interface that the clinician can use effectively; otherwise, the capabilities of the software will not meaningfully affect patient care. Usability is “...effectiveness, efficiency and satisfaction with which the intended users can achieve their tasks in the intended context of product use.” [2]

With such complicated functional demands, development of an interface that allows the clinician to access information and activate functionality has consistently challenged the capabilities of contemporary computing technology and required incorporation of novel methods of study.


Heuristics for usability were identified and codified by Molich and Nielsen in 1990. These principles included simple and natural language, “speaking the user’s language,” memory load minimization, design consistency, appropriate feedback and error messages, availability of shortcuts, and error prevention.[3]

Early EHRs were based on menu selection and an alphanumeric interface, which had two major limitations: the significant time required to select the necessary information from multiple sequential menus, and a text display area too small to conveniently present the amount of information required for most clinical decisions. The cognitive load of navigating the menus also interfered with short-term memory. The storage of information in menus required clinicians to actively look for each piece of information, whereas the paper record allowed for passive discovery of information.

Citing a need to make the then-termed “computerized medical record” “smooth and efficient for routine use,” subsequent innovations in graphical user interfaces (GUIs) relied on the metaphor of pages in the prior paper chart to organize information and facilitate navigation. This metaphor included imitating the amount of paper in a stack on any given topic to mimic the physical cue from a paper chart, and modeled the text display area on the size of a standard sheet of paper. Animated page-turning further oriented the user and provided feedback on their use of the interface. The layout of information was also organized for a fixed-size monitor at a dedicated workstation. Additional innovations expanding upon the paper chart included rudimentary color-coding, changing the background of the window based on the status of the information, and index highlighting for related records.[4]

By the mid-1990s, the need to evaluate human-computer interaction and optimize the interface of clinical records to reduce cognitive load was recognized. Vendors appreciated the need to study human-computer interaction from the perspectives of cognitive psychology and ethnography. Methods included ‘think-aloud’ protocols in which researchers observed users describing their thought processes while trying to use software, studies asking users to interpret and explain the state of the software from a static picture, and analysis of session logs. Ethnographic studies included recording and studying residents presenting to attending physicians and noting the types of questions most likely to be asked. Designers recognized that there could be mismatches between the developer’s model of how the software was expected to work and the user’s model; for instance, the “desktop” metaphor that had been a successful innovation in GUI design in the prior decade was not as well suited to the clinical environment. Efforts continued to replicate the paper environment to which clinicians were accustomed, including attempts to incorporate pen-based technology.[5]

With the increasing adoption of the World Wide Web, by 1998 designers of healthcare interfaces became more aware of the importance of creating systems that could accommodate multiple users working simultaneously and separated across time. Insights from design models developed for the World Wide Web were adapted for the clinical environment, as information retrieval could be conceptualized as navigation between nodes and across links,[1] whereas in prior designs users had not found hyperlinked content intuitive.[4] Awareness of the potential for incorporating multimedia information was growing in this period, although the practical capability of multimedia support lagged. Patients also demonstrated an interest in greater access to their clinical records, in a manner similar to how the general public had become accustomed to looking up information on the web. It became clear that the requirements of interface design for the clinician represented a different mental model from the patient’s conception of their medical history and preferred patterns of information retrieval. As rudimentary methods for evaluating interfaces began to mature, researchers realized that the artificial setting of the laboratory and the creation of mock scenarios for testing were inadequate to capture the needs of the user, and advocated for iterative processes alternating between simulated environments for hypothesis testing and real-world observation.[6]

As the functionality of EHRs increased throughout the early 2000s, a frequent complaint about EHR interfaces was the requirement to switch between multiple screens to obtain necessary information, interrupting workflows. A lack of context-specific display of the information relevant to a practitioner for a given task added to the navigation burden. Direct navigation to high-value sections of the EHR was often unavailable, so users complained of needing to backtrack to different parts of the application and then re-navigate to the next part of their workflow. For example, an interface with no easy way to transition from reviewing results to writing a note about those results wasted clinicians’ time. Pop-ups that theoretically allowed the user to stay on one page while working in a new window added confusion as to which window was active, and required cognitive resources to keep track of many open windows. Improvements in screen resolution allowed designers to condense more information onto each screen, although technical display limitations in the clinical environment remained a significant obstacle. A major focus of usability studies at this time was specific prototypes of clinical devices, workstations, and portable computing technology.[7] This era also produced a growing body of research finding risks of error due to interface issues, such as the risk of patient overdoses in computerized provider order entry (CPOE).[3]

Even after a decade of study of EHR interfaces, usability was still considered an under-studied component of health technology in 2009, when a position paper by AMIA cited its subjectivity as a major obstacle to rigorous evaluation. Investment from vendors was also lacking, while institutional reluctance to pay for adequate implementation support limited data gathering. Standardizing interface elements among vendors became appreciated as a goal; efforts included the Microsoft Health Common User Interface in 2008, which attempted to collect validated designs and describe their characteristics and the contexts in which they were appropriate.[2] However, studies in subsequent years continued to identify the interface as a major barrier to EHR acceptance, citing problems with data display, navigation, workflow, and cognitive overload, and major EHR usability studies continued to show failures to adhere to Molich and Nielsen’s usability heuristics well into the second decade of the 21st century. Workarounds for the persistent inefficiency of EHRs continued to contribute to error, notably inaccuracies carried through copy/paste, selection of incorrect orders from drop-downs, and selection of incorrect patients or providers through text-matching. Recent interfaces have reduced wrong-patient errors by incorporating patient photographs. The incorporation of templates into EHRs led to errors such as the persistence of defaults when clinically inappropriate, or entry of information into an adjacent field. Innovations such as copy-paste tracking and color-coding of template-derived information have been recommended in response to these issues.[3]
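The wrong-patient errors described above arise when an interface commits to a chart on the strength of a loose text match alone. A minimal sketch of one mitigation, requiring a second identifier before selection, is shown below; all names, medical record numbers, and function names are illustrative assumptions, not taken from any real EHR.

```python
# Hypothetical sketch: reducing wrong-patient selection errors by requiring
# confirmation of a second identifier (here, the MRN) instead of acting on
# a single free-text name match. All data is illustrative.

from dataclasses import dataclass


@dataclass
class Patient:
    mrn: str   # medical record number
    name: str
    dob: str   # date of birth, ISO format


REGISTRY = [
    Patient("100234", "John Smith", "1957-03-12"),
    Patient("100981", "John Smyth", "1957-03-12"),  # near-duplicate name
]


def search_by_name(query: str) -> list:
    """Loose text match -- the error-prone step the literature warns about."""
    q = query.lower()
    return [p for p in REGISTRY if q in p.name.lower()]


def select_patient(query: str, confirm_mrn: str):
    """Commit to a chart only when the name match AND the MRN agree."""
    matches = search_by_name(query)
    confirmed = [p for p in matches if p.mrn == confirm_mrn]
    return confirmed[0] if len(confirmed) == 1 else None


# A bare name search is ambiguous between the two similar patients...
assert len(search_by_name("john sm")) == 2
# ...but requiring the MRN disambiguates, or refuses to select at all.
assert select_patient("john sm", "100234").name == "John Smith"
assert select_patient("john sm", "999999") is None
```

The design choice mirrors the article's point: the interface, not the clinician's vigilance, carries the burden of preventing the error.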

EHR Interface principles


Physicians engage in rule-based decision-making based on procedural knowledge.[1]

Competition among multiple pieces of information degrades working memory.

Frustration with interacting with an interface is itself a drain on cognitive resources.

Requiring multiple navigation steps and switching between views can interrupt cognitive processing of information or cause a loss of overview.[7]

Incomplete or inconsistent display of information interacts with varied physician knowledge to create variability in decision-making.[3]


EHRs must accommodate collaborative work between team members of different roles and with different scopes of practice.

Clinical decision support should be incorporated in a manner that is relevant and timely for the provider.

EHRs must support patient-centered care and adapt to the circumstances of the patient.[1]

Data entry must allow for inclusion of the nuances of the patient’s narrative.[8]

Functions of the application, such as ordering, should be easily accessible from the display of information that would cause a clinician to want to perform an action.

The source of data must be readily apparent for critical evaluation.

Privacy and security of data must be ensured.[1]

Artifacts of the database structure or technical components of the EHR should not constrain the user interface or require the physician to remember minutiae of the technical limitations.[2]

The EHR must have the capacity for error prevention, both in medical content and in navigation or use.[3]


Users require instant and reliable feedback from the application to verify that their actions have been properly recorded.

Users are more likely to maximize the effectiveness of an application if they feel they have adequate control over it and can customize it to their needs.[1]

Modal windows that require the content within the window to be addressed before any other actions are taken can cause inefficiency in a clinician’s workflow.[2]
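The blocking behavior that makes modal windows costly can be made concrete with a small sketch: while a modal is pending, every other action is refused until it is resolved. The class and method names below are illustrative assumptions, not drawn from any real EHR.

```python
# Hypothetical sketch of why modal windows interrupt workflow: while a
# modal is open, all other actions are blocked until it is resolved,
# forcing the clinician to break their current train of thought.

class ChartWorkspace:
    def __init__(self) -> None:
        self.active_modal = None   # prompt text of the pending modal, if any
        self.log = []              # actions that actually completed

    def open_modal(self, prompt: str) -> None:
        self.active_modal = prompt

    def resolve_modal(self) -> None:
        self.active_modal = None

    def perform(self, action: str) -> bool:
        """Attempt an action; refused whenever a modal is pending."""
        if self.active_modal is not None:
            return False           # the clinician must address the modal first
        self.log.append(action)
        return True


ws = ChartWorkspace()
ws.open_modal("Sign pending orders?")
blocked = ws.perform("review labs")   # False: the modal blocks everything
ws.resolve_modal()
allowed = ws.perform("review labs")   # True once the modal is dismissed
```

The inefficiency the article describes is exactly this forced serialization: work unrelated to the modal's content cannot proceed until the modal is dismissed.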


Speed is a critical feature of EHR performance.[7]

Providers generally prefer the simultaneous display of large amounts of information rather than scrolling. However, the desire for efficient display competes with the tendency of the user to be overwhelmed and distracted by large amounts of data.

Information presented must be relevant and context-specific, which in practice varies between types of providers.[5]

A task-centered design approach allows designers to direct the user toward action-oriented sections of an application.[7]


Displays of similar items and tasks should be consistent across all views of an application.[1]

The manner in which data is presented should be consistent with its meaning, termed semantic representation.[5]

A visual hierarchy refers to grouping items of similar type together, and communicating content relationships through placement.

Color usage should not be distracting, and highlights or selections should not reduce contrast or interfere with legibility of text.

Screen elements that are too close together lead to slower serial processing.

Placing items too close together makes clinicians more likely to click on the wrong element, which can lead to slower performance and frustration if recognized, or medical errors if unrecognized.[7]

Display must be conducive to physicians “skimming” content for necessary information.

Abbreviations and acronyms should only be used in the EHR interface if they are commonly understandable to all providers using it.[3]
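The abbreviation guideline above lends itself to automated screening: display text can be checked against an allow-list of abbreviations accepted as commonly understood, flagging the rest for spelling out. The lists below are illustrative examples only, not an authoritative clinical vocabulary.

```python
# Hypothetical sketch: screening interface text against an allow-list of
# abbreviations, in the spirit of the guideline that only commonly
# understood abbreviations should appear in the EHR interface.
# The APPROVED set here is an illustrative assumption, not a standard.

import re

APPROVED = {"BP", "HR", "ECG", "BMI"}


def review_abbreviations(text: str):
    """Return abbreviations in `text` that are not on the approved list."""
    tokens = re.findall(r"\b[A-Z]{1,4}\b", text)  # short all-caps tokens
    return sorted({t for t in tokens if t not in APPROVED})


# "BP" passes; "U" and "QD" are flagged for spelling out.
issues = review_abbreviations("BP stable; continue insulin 10 U QD")
assert issues == ["QD", "U"]
```

A check like this could run when templates or order names are authored, rather than at the point of care, so it adds no navigation burden for the clinician.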


  1. Armijo D, McDonnell C, Werner K. Electronic health record usability: evaluation and use case framework. (Prepared by James Bell Associates, under Contract No. 290-07-10073T). Rockville, MD: Agency for Healthcare Research and Quality. October 2009. AHRQ Publication No. 09(10)-0091-1-EF.
  2. Armijo D, McDonnell C, Werner K. Electronic health record usability: interface design considerations. (Prepared by James Bell Associates, under Contract No. 2901-07-10073T). Rockville, MD: Agency for Healthcare Research and Quality. October 2009. AHRQ Publication No. 09(10)-0091-2-EF.
  3. Zahabi M, Kaber DB, Swangnetr M. Usability and safety in electronic medical records interface design: a review of recent literature and guideline formulation. Human Factors. 2015;57:805-834.
  4. Nygren E, Johnson M, Henriksson P. Reading the medical record. II. Design of a human-computer interface for basic reading of computerized medical records. Comput Methods Programs Biomed. 1992 Sep-Oct;39(1-2):13-25.
  5. Tang PC, Patel VL. Major issues in user interface design for health professional workstations: summary and recommendations. International Journal of Bio-Medical Computing. 1994 Jan;34(1-4):139-148.
  6. Patel V, Kushniruk A. Interface design for health care environments: the role of cognitive science. Proceedings of the AMIA Annual Symposium. 1998:29-37.
  7. Rose AF, Schnipper JL, Park ER, Poon EG, Li Q, Middleton B. Using qualitative studies to improve the usability of an EMR. Journal of Biomedical Informatics. 2005;38(1):51-60.
  8. Craig D, Farrell G. Designing a physician-friendly interface for an electronic medical record system. HEALTHINF, 2010.

Submitted by Amelia Drace