Vertiv provides power and cooling for AI and accelerated computing in the data room

11 Feb. 2025


By leveraging Vertiv's advanced power and cooling technologies, AI and accelerated computing in the data room can run more efficiently. Artificial intelligence (AI) is here, and it is here to stay. "Every industry will become a technology industry," says NVIDIA founder and CEO Jensen Huang. The use cases for AI are almost limitless, from medical breakthroughs to high-precision fraud prevention. AI is already changing our lives, just as it is changing every industry. It is also beginning to fundamentally change data center infrastructure.

Typical IT racks used to run workloads of 5-10 kW, and racks carrying loads above 20 kW were considered high density, a rarity outside of very specific applications. IT is now turning to GPU acceleration to support the computational needs of AI models, and those GPU-accelerated servers require roughly five times the power and five times the cooling capacity of traditional servers in the same space. Mark Zuckerberg announced that by the end of 2024, Meta would spend billions of dollars to deploy 350,000 of NVIDIA's H100 GPUs. Rack densities of 40 kW per rack sit at the lower end of what today's AI deployments require, and densities above 100 kW per rack will become commonplace, and at scale, in the near future.
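As a quick illustration of those figures, here is a minimal back-of-the-envelope sketch in Python. The 5-10 kW range, the ~5x multiplier, and the 40 kW and 100 kW densities come from the text above; the midpoint value and the ten-rack deployment size are hypothetical examples, not figures from this article.

```python
# Back-of-the-envelope rack load check using the figures cited above.
TRADITIONAL_RACK_KW = 7.5   # hypothetical midpoint of the 5-10 kW typical range
GPU_MULTIPLIER = 5          # GPU-accelerated AI needs ~5x the power and cooling

ai_rack_kw = TRADITIONAL_RACK_KW * GPU_MULTIPLIER
print(f"Estimated AI rack load: {ai_rack_kw:.0f} kW")  # ~38 kW, near the 40 kW low end

racks = 10  # hypothetical deployment size
print(f"Power and cooling needed for {racks} racks at 100 kW/rack: {racks * 100} kW")
```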

 

The transition to high density

 

The transition to accelerated computing will not happen overnight. Data center and server room designers must look for ways to future-proof their power and cooling infrastructure and account for the future growth of their workloads. Getting enough power to each rack requires upgrades from the grid to the rack; in the white space specifically, that can mean high-amperage busways and high-density rack PDUs. To reject the enormous heat generated by hardware running AI workloads, two liquid cooling technologies are emerging as the leading choices:

 

1. Direct-to-chip liquid cooling: Cold plates sit atop the heat-generating components (usually CPU and GPU chips) to draw off heat. Pumped single-phase or two-phase fluid absorbs heat from the cold plates and carries it out of the data center, exchanging heat, but not fluid, with the chip. This can remove roughly 70-75% of the heat generated by the equipment in the rack, leaving 25-30% that air cooling systems must still remove.

2. Rear-door heat exchangers: Passive or active heat exchangers replace the rear door of the IT rack with heat exchange coils, through which fluid absorbs the heat produced in the rack. These systems are often combined with other cooling systems, either as a strategy to maintain room neutrality or as a transitional design to begin the liquid cooling journey.

 

Although the cooling capacity of direct-to-chip liquid cooling is much higher than that of air cooling, it is important to note that the cold plates still leave some heat uncaptured. That heat is rejected into the data room unless it is contained and removed through other means, such as a rear-door heat exchanger or room air cooling.
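To make that split concrete, here is a minimal sketch using the capture ratios above. The 70-75% figure comes from the text; the 100 kW rack load is a hypothetical example value.

```python
# Splitting a rack's heat load between liquid and air, using the capture
# ratios cited above (cold plates remove ~70-75% of the heat).
rack_load_kw = 100.0  # hypothetical example: one 100 kW AI rack

for capture in (0.70, 0.75):
    liquid_kw = rack_load_kw * capture  # heat carried away by the fluid
    air_kw = rack_load_kw - liquid_kw   # residual heat for room air cooling
                                        # or a rear-door heat exchanger
    print(f"capture {capture:.0%}: liquid {liquid_kw:.0f} kW, air {air_kw:.0f} kW")
```

Even at 75% capture, a 100 kW rack still rejects 25 kW into the room, which is more than the entire load of a traditional rack.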

 

AI starter kits for retrofits and new builds

 

Power and cooling are becoming an integral part of IT solution design in the data room, blurring the line between the IT and facilities teams. That adds a high degree of complexity to design, deployment, and operation.

 

Partnership and full-solution expertise are the first requirements for a smooth transition to higher densities. To simplify that transition, Vertiv has introduced a series of optimized designs, spanning power and cooling technologies, capable of supporting workloads of up to 100 kW per rack in a variety of deployment configurations.

 

 

Design Summary

| Use case | Design | Racks | Density/rack | Heat removal from server | Heat removal from room |
| --- | --- | --- | --- | --- | --- |
| Pilots for training models, edge inference at scale | Small HPC with minimal modification | 1 | 70 kW | Water/ethylene glycol | Air |
| | Small HPC retrofit of chilled water systems | 1 | 100 kW | Water/ethylene glycol | Water/ethylene glycol |
| Centralized enterprise training, AI corner in the data center | Cost-optimized mid-size HPC retrofit | 3 | 100 kW | Water/ethylene glycol | Refrigerant |
| | Mid-size HPC with higher heat capture | 4 | 100 kW | Water/ethylene glycol + air | Water/ethylene glycol |
| | Pragmatic mid-size HPC retrofit for air-cooled rooms | 5 | 40 kW | Air | Refrigerant |
| | Mid-size HPC | 5 | 100 kW | Water/ethylene glycol | Water/ethylene glycol |
| Large AI factories | Large HPC maintaining room neutrality | 12 | 100 kW | Water/ethylene glycol + air | Water/ethylene glycol |
| | Large HPC build | 14 | 100 kW | Water/ethylene glycol | Water/ethylene glycol |

 

 

These designs give system integrators, colocation providers, cloud service providers, and enterprise users multiple paths to the data center of the future. Every facility has its nuances, and the number and density of racks is driven by the choice of IT equipment, so this series of designs offers an intuitive way to scale up or down from a base design and customize it fully to the requirements of the deployment.
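For teams comparing the options above programmatically, the design summary can be encoded as plain data and filtered by scale and density. The sketch below does that; the figures come from the table, but the field layout and the selection rule are illustrative assumptions, not a Vertiv tool.

```python
# The design summary encoded as plain data so a deployment team can filter
# candidates by rack count and density. Figures come from the table above.
DESIGNS = [
    # (design, racks, kW per rack, heat removal from server, from room)
    ("Small HPC with minimal modification",               1,  70, "water/glycol",       "air"),
    ("Small HPC retrofit of chilled water systems",       1, 100, "water/glycol",       "water/glycol"),
    ("Cost-optimized mid-size HPC retrofit",              3, 100, "water/glycol",       "refrigerant"),
    ("Mid-size HPC with higher heat capture",             4, 100, "water/glycol + air", "water/glycol"),
    ("Pragmatic mid-size HPC retrofit, air-cooled rooms", 5,  40, "air",                "refrigerant"),
    ("Mid-size HPC",                                      5, 100, "water/glycol",       "water/glycol"),
    ("Large HPC maintaining room neutrality",            12, 100, "water/glycol + air", "water/glycol"),
    ("Large HPC build",                                  14, 100, "water/glycol",       "water/glycol"),
]

def candidates(min_racks: int, min_kw_per_rack: int):
    """Yield designs that meet or exceed the requested scale and density."""
    for name, racks, kw, from_server, from_room in DESIGNS:
        if racks >= min_racks and kw >= min_kw_per_rack:
            yield name, racks, kw, from_server, from_room

# Example: deployments needing at least 4 racks at 100 kW each.
for design in candidates(min_racks=4, min_kw_per_rack=100):
    print(design)
```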

 

When retrofitting or repurposing an existing environment for AI, our optimized designs help minimize disruption to existing workloads by making maximum use of the available cooling infrastructure and capacity. For example, direct-to-chip liquid cooling can be integrated with rear-door heat exchangers to maintain a room-neutral cooling solution; in this case, the rear-door heat exchanger prevents excess heat from escaping into the room. For air-cooled facilities that want to add liquid-cooled equipment without modifying the site itself, we offer liquid-to-air design options. The same strategies can be deployed in a single rack, in a row, or at scale across HPC deployments. For multi-rack designs, we also include high-amperage busways and high-density rack PDUs to distribute power to each rack.
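The sizing pressure behind those busways and PDUs is easy to see with standard three-phase math. In the sketch below the formula is textbook; the 415 V line voltage and 0.95 power factor are assumed values, not figures from this article.

```python
import math

def feed_current_amps(rack_kw: float, line_voltage: float = 415.0, pf: float = 0.95) -> float:
    """Line current for a three-phase feed: I = P / (sqrt(3) * V_LL * PF)."""
    return rack_kw * 1000 / (math.sqrt(3) * line_voltage * pf)

for kw in (10, 40, 100):
    print(f"{kw:>3} kW rack -> {feed_current_amps(kw):.0f} A per feed")

# ~15 A at 10 kW versus ~146 A at 100 kW: the jump in current is why
# high-amperage busways and high-density rack PDUs become necessary.
```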

 

These designs are compatible with a range of different cooling options and can be used in conjunction with liquid cooling, establishing a clean, cost-effective transition path to high-density liquid cooling that does not disrupt the other workloads in the data room.

 

Although many facilities were not designed for high-density systems, Vertiv has extensive experience helping customers develop deployment plans that make the transition to high-density AI and HPC a smooth one.