
Liquid Cooling Data Center Trend

Liquid cooling, which includes both direct to chip and immersion cooling, is trending in data centers of all sizes as rack density increases to the point that air cooling is no longer cost effective. Big Data, AI, and edge computing all demand enormous computing power, which generates more heat. With increased power density comes a need for more efficient and effective cooling. If your data center keeps chasing a better cooling total cost of ownership (TCO) by changing rack layouts, floor tile fan placement, perimeter cooling, and other air-cooling set-ups, it might be time to consider the higher buy-in, but lower TCO, of liquid cooling.

There are two leading types of liquid cooling, direct to chip and immersion, and both can be either single-phase, in which the liquid does not change form, or two-phase, in which the liquid changes from liquid to gas.

Immersion Cooling

In immersion cooling applications, the liquid coolant is in direct contact with the IT electronic components. All fans within the server can be removed and all electronics are placed in the liquid. The liquid is slow to react to external changes in temperature, shielding the components from the influence of humidity and pollutants, and, without fans, the system operates in near silence. Immersion cooling can be implemented as single-phase or two-phase.

Single-phase immersion cooling encapsulates the server in a sealed chassis, and the system can be configured in rackmount or stand-alone format. In rackmount form, the electronic components are cooled by dielectric fluid either passively, via conduction and natural convection, or actively, as the fluid is pumped within the servers. Heat exchangers and pumps can be located inside the server or to the side of the rack, where heat is transferred from the dielectric liquid to the water loop. In the tub format of single-phase immersion cooling, also called open bath, the IT equipment is completely submerged in the fluid. In a tub, the racks are not stacked vertically but laid horizontally, like a rack on its back. The heat within the dielectric fluid is transferred to a water loop via heat exchanger, using a pump or natural convection.

Two-phase immersion cooling also places the server in the liquid, but the liquid changes state during the cooling process. As the fluid heats up, it boils into vapor; the vapor then condenses, and the water circuit and heat exchanger carry the heat away.
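
To put rough numbers on the difference between the two modes, the sketch below compares how much heat one kilogram of coolant can carry away: sensible heat (Q = m·cp·ΔT) for single-phase versus latent heat of vaporization for two-phase. The fluid properties are illustrative assumptions in the general range of fluorocarbon dielectric fluids, not values for any specific product.

```python
# Rough comparison of single-phase vs. two-phase heat removal per kg of
# dielectric coolant. Fluid properties are illustrative assumptions only
# (roughly in the range of common fluorocarbon fluids), not vendor data.

cp = 1100.0       # J/(kg*K): assumed specific heat of the dielectric fluid
delta_t = 15.0    # K: assumed temperature rise across the bath (single-phase)
h_fg = 100_000.0  # J/kg: assumed latent heat of vaporization (two-phase)

sensible = cp * delta_t  # J removed per kg circulated in single-phase
latent = h_fg            # J removed per kg boiled in two-phase

print(f"Single-phase: {sensible / 1000:.1f} kJ per kg of fluid circulated")
print(f"Two-phase:    {latent / 1000:.1f} kJ per kg of fluid boiled")
print(f"Ratio: about {latent / sensible:.1f}x more heat per kg in two-phase")
```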

Direct to Chip Cooling

Direct to chip cooling directs coolant to the hottest components via cold plates mounted on the chips within the server. All direct to chip cooling methods are deployed in rackmount form. The IT electronic components are not in direct contact with the liquid coolant. Fans are still required to provide airflow, meaning that conventional air-cooling infrastructure is reduced but still utilized. Water or dielectric liquid can be used as the coolant, distributed by fluid manifolds at the back of the rack. In single-phase direct to chip cooling, the fluid does not change from liquid to gas while cooling. In the two-phase version, the liquid turns to gas as it cools the system, which requires additional system controls. Most often, dielectric fluid is used in a two-phase system, which reduces the risk of water damage to the equipment. The dielectric vapor can be transported to a condenser outside, or the heat can be rejected through a building water loop.
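
As a rough illustration of how a direct to chip loop is sized, the sketch below uses the sensible-heat relation Q = ṁ·cp·ΔT to estimate the water flow a cold plate needs in order to hold a given temperature rise. The chip power and allowable rise are hypothetical values chosen for illustration.

```python
# Back-of-the-envelope cold plate flow sizing using Q = m_dot * cp * dT.
# The chip power and allowable temperature rise are hypothetical.

chip_power_w = 500.0  # W: assumed heat load on one cold plate
cp_water = 4186.0     # J/(kg*K): specific heat of water
delta_t = 10.0        # K: assumed allowable coolant temperature rise

m_dot = chip_power_w / (cp_water * delta_t)  # kg/s of water required
liters_per_min = m_dot * 60.0                # ~1 kg of water per liter

print(f"Required flow: {m_dot:.4f} kg/s (~{liters_per_min:.2f} L/min)")
```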

Why consider Liquid Cooling?

  • Rising chip and rack densities, lower latency requirements, the increasing use of GPUs, and rising CPU power consumption all demand more efficient, effective cooling.
  • The pressure to reduce energy consumption has data center managers evaluating every aspect of cooling and its impact on the budget.
  • As floor space comes at a premium in your facility, liquid cooling can allow you to add more computing power in less space. Liquid cooling can also help facilities use former grey space or non-traditionally shaped space for IT.
  • Harsh environments can still house IT when you use immersion liquid cooling that isolates servers from the environment. Without fans, there is inherent protection against dust and debris.
  • Water can no longer be taken for granted, and liquid cooling greatly reduces the need for water, along with the associated budget hit.

Retrofits

Direct to chip cooling is ideal for retrofits of existing air-cooled systems. With relatively minor changes, and the addition of cold plates and tubing, a rackmount system can be converted to direct to chip liquid cooling. A dielectric fluid wouldn't cause much damage if it leaks, and facilities can retrofit one rack at a time. There would be a learning curve as staff learns how to maintain the new system.

Implementing immersion cooling carries a bigger upfront cost to add the bath or tub in which the chassis is immersed. But a tub-based immersion system does not have to be housed in a data center and can operate in grey space or harsh environments. And because it is quiet, it is a great option for areas where a noisy cooling system would not be welcome. A tub-based system can also work in a data center alongside existing cooling equipment.

At some point, the benefits of liquid cooling, such as low water usage, higher efficiency, reduced noise, and the ability to be used almost anywhere, may be worth the higher buy-in cost of retrofitting with direct to chip or an immersion cooling system.

Maximizing Cooling for Existing Systems

Adding liquid cooling for retrofits may currently be cost prohibitive for some facilities. In those cases, hot aisle/cold aisle layout design for servers can conserve energy and lower cooling costs by better managing air flow.

A hot aisle/cold aisle format involves lining up server racks in alternating rows, with cold air intakes facing one way and hot air exhausts facing the other. The fronts of the racks face the cold aisles, and the heated exhausts from the backs of the racks face the hot aisles. Usually the cold aisles face air conditioner output ducts, while warm exhaust air flows toward air conditioner return ducts.

Hot aisle/cold aisle arrangements lower cooling costs by better managing airflow, requiring lower fan speeds, and increasing the use of air-side or water-side economizers. In the proper layout, row cooling can replace a mix of cooling systems, such as perimeter plus row. When utilized in an ideal architecture, row cooling can reduce hot spots and cooling costs, protect your equipment, and free up space for more racks.
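
The fan-speed point is bigger than it may look, because fan power scales roughly with the cube of fan speed (the fan affinity laws). A minimal sketch, assuming a hypothetical 5 kW rated fan:

```python
# Fan affinity laws: flow ~ speed, pressure ~ speed^2, power ~ speed^3.
# A modest reduction in required fan speed yields an outsized power saving.

rated_power_kw = 5.0  # kW: hypothetical fan power at full speed

for speed_fraction in (1.0, 0.9, 0.8, 0.7):
    power = rated_power_kw * speed_fraction ** 3
    print(f"{speed_fraction:.0%} speed -> {power:.2f} kW "
          f"({power / rated_power_kw:.0%} of rated fan power)")
```

Running at 80% speed, for example, needs only about half the rated fan power, which is why better airflow management translates so directly into lower cooling bills.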

Once the ideal rack cooling layout has been determined, preferably through modeling software, perimeter cooling can be removed and more racks can be added to the same existing floor space, all while improving the efficiency of the data center. When thermal containment is added, overall cooling efficiency increases further, and costs drop.

Thermal containment for hot/cold aisles is essentially what it sounds like: doors and a ceiling are added to aisles to contain the air within that aisle and prevent it from mixing with air of a different temperature. If those air masses of different temperatures mix, the cooling system must work harder to maintain the proper temperature for the equipment. Cold aisle containment contains the conditioned air that the cooling system produces, keeping the racks cool. In hot aisle containment, the hot aisle is contained so that the precision air conditioning units receive only hot air to expel from the aisles.
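
The penalty from mixing can be approximated with a simple two-stream mixing model: if a fraction of hot exhaust recirculates into the cold aisle, the IT inlet temperature rises above the supply temperature, forcing a colder (more expensive) supply setpoint to compensate. All values below are hypothetical.

```python
# Simple two-stream mixing model for a cold aisle without containment.
# If a fraction f of the air reaching the IT inlets is recirculated hot
# exhaust, the inlet temperature rises above the supply temperature.
# All values are hypothetical, for illustration only.

t_supply = 18.0   # deg C: cooling unit supply air temperature
t_exhaust = 35.0  # deg C: rack exhaust air temperature

for f in (0.0, 0.1, 0.2, 0.3):  # recirculated fraction of inlet air
    t_inlet = (1 - f) * t_supply + f * t_exhaust
    print(f"recirculation {f:.0%}: IT inlet ~{t_inlet:.1f} deg C")
```

With containment the recirculated fraction approaches zero, the inlet temperature equals the supply temperature, and the supply setpoint can safely be raised.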

The APC InRow Cooling line provides for a hot-aisle/cold-aisle cooling layout with either an open architecture or with containment. With more control over air distribution through a shorter path between the hot air and the heat removal, this cooling method is efficient and predictable and can easily be implemented in existing data centers. The APC by Schneider Electric Uniflair Chilled Water InRow and the Uniflair Direct Expansion InRow cooling systems use intelligent controls to meet the load and improve predictability and efficiency.

The APC Chilled Water InRow system closely couples the cooling with the heat source while intelligent controls actively adjust to match the load. Available in various configurations, these units are ideal for meeting the diverse requirements of medium to large data centers.

The APC Direct Expansion InRow system uses air, water, or glycol for cooling network closets, server rooms, and data centers. The InRow Direct Expansion family closely couples the cooling with the heat source and is available in air-cooled, self-contained, and fluid-cooled configurations. IT operators looking to improve efficiency or deploy higher density equipment will benefit from the modular design.

Benefits of Hot-Aisle/Cold-Aisle Layout for Cooling

  • Cooling systems can be set to a higher supply temperature (thereby saving energy and increasing cooling capacity) and still supply the load with safe operating temperatures.
  • Elimination of hot spots. Containment allows cooling unit supply air to reach the front of IT equipment without mixing with hot air. This means that the temperature of the supply air at the cooling unit is the same as the IT inlet air temperature – i.e., uniform IT inlet air temperatures. When no mixing occurs, the supply air temperature can be increased without risk of hot spots while still gaining economizer mode hours.
  • Economizer mode hours are increased. When outdoor temperature is lower than indoor temperature, the cooling system compressors don’t need to work to reject heat to the outdoors. Increasing the set point temperature on cooling systems results in a larger number of hours that the cooling system can turn off its compressors and save energy.
  • Humidification / dehumidification costs are reduced. By eliminating mixing between hot and cold air, the cooling system's supply air temperatures can be increased, allowing the cooling system to operate above the dew point temperature. When supplying air above the dew point, no humidity is removed from the air. If no humidity is removed, adding humidity is not required, saving energy and water (a dew point sketch follows this list).
  • Better overall physical infrastructure utilization, which enables right-sizing – which, in turn, results in equipment running at higher efficiencies. Oversized equipment experiences larger fixed losses than right-sized equipment. However, oversizing is necessary for traditional cooling because extra fan power is required both to overcome underfloor obstructions and to pressurize the raised-floor plenum.
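
On the humidification point above, the dew point can be estimated with the Magnus approximation; as long as supply air stays above it, the coils remove no moisture. A minimal sketch with hypothetical room conditions:

```python
import math

# Magnus approximation for dew point. If the cooling system's supply air
# temperature stays above this, no moisture condenses on the coils and no
# re-humidification is needed. Room conditions below are hypothetical.

A, B = 17.62, 243.12  # Magnus coefficients (deg C, typical room range)

def dew_point_c(temp_c: float, rh_percent: float) -> float:
    gamma = math.log(rh_percent / 100.0) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

# e.g., a 24 deg C room at 45% relative humidity
print(f"Dew point: {dew_point_c(24.0, 45.0):.1f} deg C")  # ~11.3 deg C
```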

If a data center adds in-row cooling with thermal containment, in most cases the entire room can be cooled without perimeter cooling. There are many advantages to cooling the entire room with only row coolers compared to a mix of row and perimeter coolers:

  • Allows elimination of the raised floor – Perimeter coolers generally distribute air to IT equipment via a raised floor plenum. The cost associated with installing and maintaining the raised floor can be eliminated when row coolers are used to cool the entire space.
  • No fighting between row and perimeter coolers – Both temperature and humidity fighting can occur when two distinct cooling architectures are deployed in a room. This leads to ineffective operation and increased energy bills. When only row coolers are used, their operation is coordinated to avoid this.
  • Lower capital expense – Over-conservative design of a data center with row and perimeter coolers results in significant wasted capital expense. Up to half of this expense can be eliminated by relying on the row coolers to cool the entire data center load.
  • Simpler redundancy – If a data center has a redundancy requirement (e.g., N+1, N+2, or 2N), further capital expense can be avoided by eliminating the need for redundant perimeter units (see the sketch after this list).
  • Lower energy costs – Over-provisioning of coolers results in added energy expense, especially if the unnecessary coolers have fixed-speed fans, which is common in perimeter units supplying a raised floor.
  • Lower maintenance costs – Using only row coolers means no additional maintenance contracts for perimeter coolers, reducing the operating expenses in the data center.
  • Fewer vendor interactions – Often, when hybrid designs are considered, designers look to multiple vendors to supply the different systems. Using additional vendors adds complexity to maintaining data center operations.
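
As a quick illustration of the redundancy point above, the sketch below counts units for an N+1 design with row coolers only versus a hybrid row-plus-perimeter design, where each cooler family must carry its own redundant unit. All loads and capacities are illustrative assumptions.

```python
import math

# Hypothetical N+1 unit counts: row coolers only vs. a hybrid design in
# which the row and perimeter cooler families each need their own
# redundant unit. All loads and capacities are illustrative assumptions.

it_load_kw = 160.0
row_kw = 40.0    # assumed capacity per row cooler
perim_kw = 80.0  # assumed capacity per perimeter unit

# Row-only: one cooler family, so one redundant unit covers the room.
row_only_n = math.ceil(it_load_kw / row_kw)
print(f"Row-only: {row_only_n} duty + 1 redundant row cooler")

# Hybrid (half the load on each family): each family carries its own +1,
# so two redundant units sit idle, one of them a large perimeter unit.
row_n = math.ceil(0.5 * it_load_kw / row_kw)
perim_n = math.ceil(0.5 * it_load_kw / perim_kw)
print(f"Hybrid:   {row_n} duty + 1 redundant row coolers, "
      f"{perim_n} duty + 1 redundant perimeter units")
```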

For more information, see our whitepaper about Data Center Cooling Trends for Small to Medium Data Centers.

For more information about Liquid Cooling and Hot Aisle/Cold Aisle Cooling,
call 800-876-9373 or email sales@power-solutions.com.

(Information from Schneider Electric White Paper 135 “Impact of Hot and Cold Aisle Containment on Data Center Temperature and Efficiency” and White Paper 275 “Five Reasons to Adopt Liquid Cooling”)