Immersion Cooling for data centers: An exotic inevitability?

Modern data centers (DCs) use a variety of cooling system types. Most DCs today still use air cooling as the baseline, with chilled air circulated through racks and hot air exhausted out, but this method struggles with modern high-power CPUs and GPUs. Starting with Nvidia’s Hopper and expanding with Blackwell, operators are moving toward liquid cooling, specifically cold plate and direct-to-chip solutions, which can be integrated with existing air-cooling infrastructure.

More advanced systems such as immersion cooling exist, but they see limited adoption despite claimed benefits in performance density, overall cost, and efficiency. However, with the next generations of AI accelerators set to draw even more power, immersion cooling may become inevitable three or four years down the road. But is the industry ready?

Data centers are getting hotter

AI data centers dissipate heat using a combination of airflow, liquid circulation, and heat exchange systems that move the thermal load outside the facility. The basic principle is to move heat away from hot chips (CPUs, GPUs, switches) into a medium (air, a water-glycol mixture, or a dielectric fluid) and then carry that heat to cooling towers, chillers, or evaporative units, where it is released into the atmosphere.
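
To put rough numbers on that principle, here is a minimal sketch that applies the standard sensible-heat relation Q = ṁ·c_p·ΔT to estimate how much air versus water must flow to carry away a fixed heat load. The 10 kW load, the 10 °C coolant temperature rise, and the fluid properties are illustrative textbook-style assumptions, not measurements from any particular facility.

```python
# Rough sensible-heat sketch: flow needed to carry a given heat load.
# Uses the textbook relation Q = m_dot * c_p * delta_T.
# All numbers below are illustrative assumptions.

FLUIDS = {
    # name: (density in kg/m^3, specific heat in J/(kg*K)), approximate values
    "air":   (1.2, 1005.0),
    "water": (998.0, 4186.0),
}

HEAT_LOAD_W = 10_000.0   # assumed 10 kW of IT load
DELTA_T_K = 10.0         # assumed coolant temperature rise

for name, (density, c_p) in FLUIDS.items():
    mass_flow = HEAT_LOAD_W / (c_p * DELTA_T_K)   # kg/s
    volume_flow = mass_flow / density             # m^3/s
    print(f"{name:>5}: {mass_flow:.3f} kg/s ({volume_flow * 1000:.2f} L/s)")
```

Under these assumed numbers, water needs roughly three orders of magnitude less volumetric flow than air to move the same heat, which is the basic physics behind the industry's shift toward cold plates and, potentially, immersion.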


In air-cooled DCs, servers push hot exhaust air into HVAC return plenums, where it is cooled by chillers or evaporative cooling towers before being recirculated. This approach is cheap and easy to implement, but it is insufficient for AI data centers built around hardware such as Nvidia's Blackwell GPUs, which are among the most power-hungry processors in the industry.


