Data Center Planning and Design Overview for Healthcare Organizations

From Clinfowiki

Clinical information system planning includes data centers and data center planning. A data center is a centralized location housing the core computing infrastructure of an organization, institution, or company. For organizations with a campus setting or multiple buildings, it is not uncommon to have several data centers. Data centers are more common in hospitals and large healthcare organizations than in small physician offices. HIPAA 164.308(a)(7) requires a detailed contingency plan for electronic protected health information (EPHI).[13]

Data Center Equipment Components

The main components of a typical data center are:

1. Server racks: equipment racks housing servers.[1] The term “rack unit” (abbreviated RU or U) is the industry-standard unit for describing server rack sizes; one rack unit equals 1.75 inches of vertical space, which removes the need to convert between the metric system and US standard measurements in data center design. The typical server rack holds 42 rack units (notation: 42RU or 42U), with a standard height of approximately 6 ft and a width of commonly 19 in (alternatively 23 in). A half rack (notation: 18U to 22U) is roughly 3 ft tall. A typical data center can house dozens of server racks with dozens of servers installed on each rack.
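The rack-unit arithmetic above can be sketched in a few lines; the 1.75-inch definition of a rack unit is standard, and the function name is only illustrative:

```python
RU_INCHES = 1.75  # one rack unit (1U) is defined as 1.75 inches of vertical space

def rack_height_inches(units: int) -> float:
    """Vertical equipment space, in inches, of a rack of the given size in U."""
    return units * RU_INCHES

full_rack = rack_height_inches(42)  # 73.5 in -- roughly 6 ft including the frame
half_rack = rack_height_inches(21)  # 36.75 in -- roughly the 3 ft half-rack height
print(full_rack, half_rack)
```

The result confirms the figures in the text: a 42U rack provides about 6 ft of equipment space and a half rack about 3 ft.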

2. Servers are machines that share data and information as requested by clients, following the client-server model known in computer science. Traditional servers are installed on server racks, where connections are made to them. In recent years, emerging technologies have driven companies, institutions, and organizations toward ‘blade servers,’ which are installed in custom blade enclosure racks. Blade servers have a modular design intended to reduce space and interconnection complexity. A blade server does not contain all of the components of a traditional server but can perform practically the same functions. Each blade server can be electrically rated at up to a few thousand watts of power (more than an average residential home’s usage).[2] Servers are high-density electrical loads that generate large volumes of heat, hence the need for significant air-conditioning capacity in data centers.

3. Cluster switches interconnect the servers within and between racks. They also add flexibility to data center design and maintenance, making it easy to install or remove servers (i.e., scalability).

4. Routers connect the local area network (LAN) to the wide area network (WAN). Routers forward data packets between clients on the network and the internet, selecting the fastest available path for the information to travel.

5. Hard drives and other miscellaneous computer hardware for data storage and data center function.

6. Electrical facilities equipment. The electrical circuit breaker panels feeding electricity to the servers are usually housed in or near the data center for ease of access. These panels are fed from transformers and larger electrical panels, which in turn are supplied by normal power systems, uninterruptible power systems, and standby power systems.

7. Air conditioning equipment applied specifically to data center cooling is called a CRAC unit (computer room air conditioning unit). Its main purpose is to keep the data center cool. These systems are housed in nearby facility rooms, with ventilation into the data center.

8. Security and controls equipment is often elaborate and is frequently required for physical access to certain areas of the data center (or to the data center itself). These systems provide network access control, monitoring, and system alarming.

Data Center Layout Description

The layout of a data center includes all or some of the following: facilities rooms (for air handling-related equipment, generators, UPS systems, and other electrical equipment), fire suppression area, a secure-access area, a command center/operations control room, networking room, back-up/media room, equipment storage room, the data center itself, and staff offices.[3]

Data Center Capacity Planning and Operation

The transfer of medical records from paper to electronic form has increased data transfer, storage, and usage, driving up demand for data center capacity. Large healthcare organizations, hospitals, laboratories, and other critical-to-function facilities require space to expand their data center real estate. Institutions facing expansion also need additional utility capacity for electricity, water (for CRAC units), telecommunications, fiber-optic networking, and security/controls management systems (for physical security, information security, emergency/fire alarming, and notifications). Construction and maintenance of such facilities carry high investment costs, so efforts to reduce cost and increase efficiency are highlighted across industries, including healthcare. The Department of Health and Human Services (HHS), for example, initiated a program in 2007 to reduce its data center energy consumption by 2015.[4]

Power Usage Effectiveness

Energy consumption reduction and energy-efficiency monitoring for a given data center are measured by the industry-standard metric called Power Usage Effectiveness (PUE). The PUE rating is defined as:

PUE = (Facility Power)/(IT Equipment Power) [5]

The PUE rating is a unitless value. A perfect PUE rating, i.e. the most energy-efficient data center possible, has a value of 1.0.
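The definition above translates directly into code. As a quick sketch (the sample power figures are illustrative, not from the article):

```python
def pue(facility_power_kw: float, it_power_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT equipment power."""
    if it_power_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return facility_power_kw / it_power_kw

# Hypothetical example: a facility drawing 1,700 kW total to run 1,000 kW of IT load
print(pue(1700, 1000))  # 1.7 -- a rating of exactly 1.0 would mean zero overhead
```

Everything above a rating of 1.0 represents overhead (cooling, power conversion losses, lighting) rather than useful computing.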

Data Center Standards Classification and Specification

Besides specific standards and guidelines outlined by NFPA, IEEE, IEC, and other nationally and internationally recognized standards and regulatory organizations, ANSI developed a standard in 2005 (amended in 2010) defining four tiers of data centers. The tiers mainly describe the level of redundancy built into a given data center’s design, ranging from no redundancy (Tier 1) to fully redundant systems with dual supplies sourced from electrical and mechanical facilities energy supplies (Tier 4).

Correspondingly, the Uptime Institute developed a performance-based specification for data centers based on these tier levels. The institute’s metrics are based on the expected annual downtime from undesired loss of the data center’s energy supply, ranging from roughly 26 minutes per year for Tier 4 to roughly 1,700 minutes per year for Tier 1.
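The per-tier downtime figures follow from availability percentages. A minimal sketch, using the availability values commonly quoted for Tier 1 (99.671%) and Tier 4 (99.995%):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def annual_downtime_minutes(availability_pct: float) -> float:
    """Expected yearly downtime, in minutes, for a given availability percentage."""
    return (1 - availability_pct / 100) * MINUTES_PER_YEAR

# Availability figures commonly quoted for the Uptime Institute tier levels:
print(round(annual_downtime_minutes(99.671)))  # Tier 1 -> about 1,729 minutes/year
print(round(annual_downtime_minutes(99.995)))  # Tier 4 -> about 26 minutes/year
```

These computed values match the roughly 26- and 1,700-minute figures cited in the text.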

Furthermore, TIA-942 is a prescriptive specification detailing design and construction guidelines for data centers.

Design for Energy Redundancy

Normal power (electrical grid) systems are used wherever a power outage is manageable or not detrimental. Standby power systems (abbreviation: SPS; typically diesel-powered generators) are used for loads needed during emergency and critical conditions, such as air handlers for cooling, servers and other computer hardware, emergency lighting, security, and alarming. Because generators require seconds or more to start up and begin supplying power, critical data center loads that must remain available during the transition from normal grid power to standby power are also connected to an uninterruptible power system (UPS). A UPS is commonly an AC/DC converter system with fast-switching capability that provides a seamless transition during a power system event or other supply change (such as maintenance or upgrades) from the electrical grid to the generator; it is supplied by an electrical bank of up to dozens of batteries.[6] Air handlers and other non-electrical facilities equipment for a Tier 4 data center may also have a duplicate set of identical equipment. The purpose of the redundant supply system is to maintain continuous operation 24 hours a day, 7 days a week, without interruption for maintenance or for incidents such as grid or component failure. Furthermore, data centers may have business continuity plans (BCP) for fires and natural disasters, in which duplicates of data and information are stored in distant locations so that not everything is lost in such events.[7]
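The UPS ride-through requirement above can be checked with simple arithmetic. The sketch below estimates how long a battery bank can carry the critical load while a generator starts; the battery capacity, load, and efficiency figures are hypothetical, not from the article:

```python
def ride_through_minutes(battery_bank_wh: float, critical_load_w: float,
                         inverter_efficiency: float = 0.9) -> float:
    """Approximate minutes a UPS battery bank can carry the critical load."""
    usable_wh = battery_bank_wh * inverter_efficiency  # energy lost in conversion
    return usable_wh / critical_load_w * 60

# Hypothetical bank: 40 batteries x 1,200 Wh each, feeding a 50 kW critical load
runtime = ride_through_minutes(40 * 1200, 50_000)
print(round(runtime, 1))  # tens of minutes -- ample margin over generator start-up
```

Even rough numbers like these show why a modest battery bank comfortably covers the seconds-to-minutes gap before standby generators take over.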

Green-Driven Future of Data Centers

The future of data centers for hospital facilities includes incorporating green energy sources such as wind and solar.[8] Data center managers in hospitals face the challenge of reducing power consumption to lower operating costs for hospital facilities. St. Charles Health System in Oregon, for example, reduced its power consumption by 30% and achieved a return on investment (ROI) of $7.1 million. Its efforts also earned the Leadership in Energy and Environmental Design (LEED) certification award for its Tier 3 data center.[9] Oregon Health & Science University (OHSU) achieved a PUE rating of 1.13, 34% more efficient than the average rating of 1.7 across all reporting data centers, upon going live in July 2014.[10]
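The 34% figure cited for OHSU follows from the two PUE ratings given above; a minimal check:

```python
def pue_improvement(baseline_pue: float, achieved_pue: float) -> float:
    """Relative reduction in PUE versus a baseline, as a percentage."""
    return (baseline_pue - achieved_pue) / baseline_pue * 100

# OHSU's reported 1.13 rating against the 1.7 average of all reporting data centers
print(round(pue_improvement(1.7, 1.13)))  # 34
```

The computation reproduces the 34% efficiency improvement stated in the text.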






5. Barroso L., Clidaras J., and Hoelzle U. The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines, Morgan and Claypool Publishers, Madison, Wisconsin, 2013, pp. 6-68.







12. Liu Y., Muppala J., Veeraraghavan M., Lin D., and Hamdi M. Data Center Networks: Topologies, Architectures and Fault-Tolerance Characteristics, Springer, Heidelberg/New York, 2013.