What is data center infrastructure? A guide for new industry professionals

By Luke Smith · 2/27/2020

We previously talked about how a data center is much more than a traditional office or warehouse building. When one goes offline, it can cost companies millions of dollars per minute. That’s why data centers need sophisticated support systems in place - which we refer to as infrastructure.

As you wade into the industry, it’s helpful to understand data center power, cooling, and connectivity infrastructure. Check out our breakdown of each of these below.


This is part four of Data Center Fundamentals, datacenterHawk’s guide to getting up to speed on the data center market. If you’re a new participant in the industry, then this is for you. Instead of analyzing deep market trends, we’ll be covering the basics one step at a time. Be sure to subscribe to our monthly update to know when we release future topics.


If you would like to sign up for our free official Data Center Fundamentals email course, click here.

Electrical Infrastructure (Power)

Backup generators are on hand at data centers in the event of a regional power outage or disruption.

Data centers consume far more power than a typical office building or warehouse. As such, the power infrastructure is one of the most critical components.

Electricity travels along what’s called the power chain, which is how electricity gets from the utility provider all the way to the server inside the data center. A traditional power chain starts at the substation and eventually makes its way through a building transformer, a switching station, an uninterruptible power supply (UPS), a power distribution unit (PDU) and a remote power panel (RPP) before finally arriving at the racks and servers. Data centers also utilize on-site generators to power the facility if there is an interruption in the power supply from the substation.
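The sequence described above can be sketched as a simple ordered list. This is only an illustration of the article's traditional power chain; the component names come from the text, and actual equipment and ordering vary by facility.

```python
# The traditional power chain, from utility to server, per the article.
# Real facilities may add, combine, or rename stages.
POWER_CHAIN = [
    "utility substation",
    "building transformer",
    "switching station",
    "uninterruptible power supply (UPS)",
    "power distribution unit (PDU)",
    "remote power panel (RPP)",
    "rack/servers",
]

def describe_power_chain(chain):
    """Return a human-readable path showing the order power flows through."""
    return " -> ".join(chain)

print(describe_power_chain(POWER_CHAIN))
```

Note that the on-site generators mentioned above sit alongside this chain rather than inside it: they feed the UPS and downstream equipment only when the utility supply is interrupted.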

Each step of the process has a distinct purpose, whether it be transforming the power to a usable voltage, charging backup systems, or distributing power to where it is needed. We’ll be breaking down what each component does and why it’s important in future articles.


Mechanical Infrastructure (Cooling)

Many data centers blow air underneath raised floors to keep servers cool.

Servers produce substantial heat when operating and cooling them is critical to keeping systems online.

The amount of power a data center can consume is often limited by how much power consumption per rack can be kept cool, typically referred to as density. In general, the average data center can cool at densities between 5 and 10 kW per rack, but some can go much higher.
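The relationship between rack count, density, and total cooling load is simple multiplication. The figures below are illustrative, not from the article:

```python
def total_cooling_capacity_kw(num_racks, density_kw_per_rack):
    """Total IT load (kW) the cooling plant must handle at a given rack density."""
    return num_racks * density_kw_per_rack

# Hypothetical example: a 200-rack data hall at 8 kW per rack
# (within the 5-10 kW range cited above) needs 1,600 kW of cooling.
load_kw = total_cooling_capacity_kw(200, 8)
print(f"{load_kw} kW ({load_kw / 1000} MW)")
```

This is why density matters commercially: at a fixed cooling capacity, supporting higher densities means either fewer racks or a larger cooling plant.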

The most common way to cool a data center involves blowing cool air up through a raised floor, as pictured above. In this setup, racks sit on a raised floor with removable tiles, usually about three feet above the concrete slab. Cool air is fed underneath the raised floor and forced up through perforated tiles around the racks. The warmer air exiting the servers rises, is pulled out of the data hall, run through chilled-water systems to remove the heat, and fed back beneath the raised floor to cool the servers again.

While raised floor is a common solution, it isn’t always necessary.

Some data centers utilize isolation, where physical barriers are placed to direct cool air toward the servers and hot air away. It’s common to see high ceilings in newer data centers as well. By simply increasing the volume of air in a data hall, it’s easier to keep the room from getting too hot.

Another, less common solution is liquid cooling, where servers are mounted in racks submerged in a special non-conductive fluid. This method is among the most efficient, enabling the data center to operate at extremely high densities and prolonging the lifetime of the equipment.

In certain climates, data centers can also take advantage of “free cooling,” where they use outside air to cool the servers. Instead of capturing the hot air and cooling it for reuse, they allow the heat to escape and pull in cool air from outside. This process is, as expected, much cheaper and more energy-efficient than operating mechanical cooling infrastructure.


Connectivity Infrastructure

A meet-me-room provides a single location for all the servers in the data center to connect to fiber providers.

A data center’s connectivity infrastructure is also important. Without it, a data center would just be a building full of computers that can’t communicate with anyone outside the building.

As data centers are the primary foundation for activities happening online, the buildings themselves need to be highly connected. Access to a variety of fiber providers connects a data center to a wide network able to provide low latency connections and reach more customers.

Fiber traditionally runs into a data center through secured “vaults” and into the building’s meet-me-room or directly to a user’s servers. A meet-me-room is a location where fiber lines from different carriers can connect and exchange traffic.


Redundancy

Given the critical nature of data center infrastructure, it isn’t sufficient to only have the systems necessary for operations. Data center users also care about the additional equipment a data center has on hand to ensure that no single system failure can take the data center, and the users’ servers, offline. This measure is called redundancy.

For example, a data center may need 10 chillers to cool its servers, but will have a total of 11 chillers on-site. The extra chiller is redundant and is used in the event another chiller fails.

Redundancy is communicated as the “need,” or “N,” plus the number of extra systems. The example above would be considered N+1: the data center needs 10 chillers and has one extra. If that data center instead had 10 extra chillers in addition to the 10 it needed to operate, its redundancy would be double its need, or 2N.

The closer to N, the less redundant a data center is.

In an N+1 scenario, a data center could lose one chiller and still operate thanks to the spare, but it would not have another available if a second chiller went down. In a 2N scenario, every operational chiller could fail and the data center would have enough spares to replace them all. Today, most data center providers find N+1 sufficient to avoid downtime, though some industries require their data centers to be more redundant.
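The labeling convention above reduces to simple arithmetic on the number of required versus installed units. A minimal sketch, using the article's chiller example:

```python
def redundancy_label(needed, installed):
    """Label redundancy as N, N+k, or 2N, following the convention above.

    `needed` is N, the units required to operate; `installed` is the
    total on-site. The extra units beyond N determine the label.
    """
    extra = installed - needed
    if extra < 0:
        raise ValueError("fewer units installed than needed (below N)")
    if extra == needed:
        return "2N"
    if extra == 0:
        return "N"
    return f"N+{extra}"

# The article's examples: 10 chillers needed.
print(redundancy_label(10, 11))  # one spare -> "N+1"
print(redundancy_label(10, 20))  # double the need -> "2N"
```

In practice the notation extends further (N+2, 2N+1, and so on), but the idea is the same: the closer the installed count is to N, the less redundant the facility.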

Redundancy applies to most aspects of a data center, including power supplies, generators, cooling infrastructure, and UPS systems. Some data centers have multiple power lines entering the building, or are fed from multiple substations to ensure uptime in the event a line is damaged somewhere. The same approach can be taken with fiber lines. Generators are also used as a back-up power source should the supply be interrupted.

Data centers support the internet ecosystem that more and more of the world relies on today. As such, they require robust infrastructure to ensure there’s no interruption in the services they provide.



