Technology · February 16, 2026

Data Center Cooling Types Explained

Cooling is the unsung hero of data center operations, and for AI workloads it's often the make-or-break factor. A single NVIDIA H100 GPU dissipates up to 700 W of heat, an eight-GPU server roughly 10 kW, and a rack holding several such servers can produce well over 40 kW of thermal output. Traditional air cooling can't handle that. Understanding the different cooling technologies available is essential for anyone deploying AI infrastructure.

Why Cooling Matters More Than Ever

The evolution from general-purpose computing to AI has fundamentally changed cooling requirements. Traditional servers draw 500 W to 1 kW each, with racks averaging 5-15 kW. Modern GPU servers draw 5-10 kW each, with racks reaching 40-100+ kW. This roughly tenfold jump in heat density makes cooling technology a primary selection criterion when choosing an AI data center.
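To sanity-check those rack figures, here is a rough back-of-the-envelope sketch in Python. The per-GPU wattage echoes the H100 figure above, while the per-server overhead and servers-per-rack counts are illustrative assumptions rather than vendor specifications.

```python
# Back-of-the-envelope rack heat estimate (all inputs are assumptions
# for illustration, not vendor specifications).

def rack_heat_kw(gpus_per_server: int,
                 watts_per_gpu: float,
                 server_overhead_w: float,
                 servers_per_rack: int) -> float:
    """Estimate total rack thermal output in kW.

    server_overhead_w covers CPUs, memory, NICs, fans, and power
    conversion losses inside each server.
    """
    server_w = gpus_per_server * watts_per_gpu + server_overhead_w
    return servers_per_rack * server_w / 1000.0

# An H100-class GPU dissipates up to ~700 W; assume ~4.4 kW of per-server
# overhead so an eight-GPU server lands near 10 kW total.
print(rack_heat_kw(8, 700, 4400, 1))   # ~10 kW  (one 8-GPU server)
print(rack_heat_kw(8, 700, 4400, 4))   # ~40 kW  (four servers per rack)
print(rack_heat_kw(8, 700, 4400, 8))   # ~80 kW  (dense GPU rack)
```

Because essentially all electrical power ends up as heat, the same numbers describe both the power feed and the cooling load a rack presents.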

Inadequate cooling doesn't just risk hardware failure; it also causes GPU throttling, cutting performance by 20-40% as chips automatically downclock to avoid thermal damage. For AI training runs that cost thousands of dollars per hour, throttling translates directly into wasted money and longer schedules.
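As a concrete illustration of that waste, here is a minimal sketch; the run length, hourly rate, and slowdown are assumed example values, not benchmarks.

```python
# Rough cost of thermal throttling on a training run (example values).

def throttling_penalty(run_hours: float, dollars_per_hour: float,
                       slowdown: float) -> float:
    """Extra cost when sustained throughput drops by `slowdown` (0-1).

    A run that would take `run_hours` at full speed stretches to
    run_hours / (1 - slowdown), and the extra hours are billed too.
    """
    extra_hours = run_hours / (1.0 - slowdown) - run_hours
    return extra_hours * dollars_per_hour

# A 200-hour run at $2,000/hour with a 30% throttling-induced slowdown:
print(f"${throttling_penalty(200, 2000, 0.30):,.0f} of extra spend")
# -> roughly $171,000 in additional compute time
```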

Air Cooling

How It Works

Air cooling uses fans to move cool air across server components, absorbing heat and exhausting it into the data center environment. Computer Room Air Conditioning (CRAC) or Computer Room Air Handler (CRAH) units then remove heat from the room air.

Variations

  • Raised floor / hot aisle-cold aisle: The traditional approach. Cool air is delivered through a raised floor into cold aisles, passes through servers, and exits into hot aisles. Containment curtains or panels improve efficiency.
  • In-row cooling: Cooling units placed between racks within the row, reducing the distance heat must travel. More efficient than room-level cooling for moderate densities.
  • Overhead cooling: Ceiling-mounted units that deliver cool air directly above racks. Common in newer builds without raised floors.

Capacity and Limitations

In practice, air cooling can support about 15-25 kW per rack. Beyond that, the volume of air required becomes impractical: fans consume excessive power, and the temperature differential between supply and return air narrows, reducing efficiency. PUE for air-cooled facilities typically ranges from 1.3 to 1.6.
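One way to see why air runs out of headroom is to compute the airflow a rack would need. The sketch below uses standard air properties and an assumed 12 K supply-to-return temperature rise; the rack loads are example values.

```python
# Airflow needed to carry away a rack's heat with air alone.
# Q = rho * cp * V_dot * dT  ->  V_dot = Q / (rho * cp * dT)

RHO_AIR = 1.2      # kg/m^3, air density near sea level (approx.)
CP_AIR = 1005.0    # J/(kg*K), specific heat of air (approx.)

def airflow_m3_per_s(heat_w: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) needed to absorb heat_w at a dT rise."""
    return heat_w / (RHO_AIR * CP_AIR * delta_t_k)

def to_cfm(m3_per_s: float) -> float:
    return m3_per_s * 2118.88   # 1 m^3/s ~= 2118.88 CFM

for rack_kw in (10, 25, 50):
    flow = airflow_m3_per_s(rack_kw * 1000, delta_t_k=12)  # assumed 12 K rise
    print(f"{rack_kw} kW rack -> {to_cfm(flow):,.0f} CFM of cool air")
# 10 kW needs ~1,500 CFM; 50 kW would need ~7,300 CFM through a single
# cabinet, which is where fan power and noise become impractical.
```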

Best For

Traditional IT workloads, inference servers at moderate density, edge deployments, and facilities with existing air-cooled infrastructure where retrofitting liquid cooling isn't cost-effective.

Rear-Door Heat Exchangers (RDHx)

How It Works

A rear-door heat exchanger replaces the standard rear door of a server cabinet with a liquid-cooled heat exchanger. Hot exhaust air from servers passes through the heat exchanger, which absorbs heat into a chilled water loop. The cooled air exits the cabinet at or near room temperature.
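For a feel for the water side, here is a quick sketch of the chilled-water flow a rear door needs; the 40 kW load and the 8 K water temperature rise are assumed example values.

```python
# Chilled-water flow required by a rear-door heat exchanger.
# Q = m_dot * cp * dT  ->  m_dot = Q / (cp * dT)

CP_WATER = 4186.0   # J/(kg*K), specific heat of water

def water_flow_lpm(heat_w: float, delta_t_k: float) -> float:
    """Water flow in liters per minute to absorb heat_w at a dT rise
    (1 kg of water is roughly 1 liter)."""
    kg_per_s = heat_w / (CP_WATER * delta_t_k)
    return kg_per_s * 60.0

# A 40 kW door with an assumed 8 K water temperature rise:
print(f"{water_flow_lpm(40_000, 8):.0f} L/min")   # ~72 L/min (~19 GPM)
```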

Advantages

  • Retrofits onto existing racks without modifying servers
  • Can handle 30-40 kW per rack when combined with room-level cooling
  • No changes to server hardware or warranty implications
  • Relatively low cost ($3,000-8,000 per door)

Limitations

  • Still relies on internal server fans to push air through the heat exchanger
  • Maximum practical capacity around 40 kW — insufficient for the highest density GPU racks
  • Requires chilled water infrastructure in the facility
  • Adds weight to the rack (50-100 lbs per door)

Best For

Mid-density GPU deployments (20-40 kW/rack), facilities transitioning from air to liquid cooling, and organizations that need GPU hosting without modifying their server hardware.

Direct-to-Chip Liquid Cooling

How It Works

Cold plates are mounted directly on CPUs and GPUs, with liquid coolant circulated through them to absorb heat at the source. The heated liquid is pumped to a heat exchanger or cooling distribution unit (CDU) where heat is rejected to a facility water loop or dry cooler.
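A simple way to reason about the facility impact is to split each rack's heat between the liquid loop and the room air. The 80 kW rack load and 75% liquid-capture fraction below are illustrative assumptions; the typical capture range is discussed in the advantages list that follows.

```python
# Split a direct-to-chip rack's heat between the liquid loop and room air.
# The rack load and capture fraction are illustrative assumptions.

def heat_split(rack_kw: float, liquid_fraction: float) -> tuple[float, float]:
    """Return (kW rejected to the liquid loop, kW left for air cooling)."""
    to_liquid = rack_kw * liquid_fraction
    return to_liquid, rack_kw - to_liquid

rack_kw = 80.0          # assumed dense GPU rack
to_liquid, to_air = heat_split(rack_kw, liquid_fraction=0.75)
print(f"{to_liquid:.0f} kW to the CDU loop, {to_air:.0f} kW to room air")
# -> 60 kW to liquid, 20 kW to air: the residual air load per rack falls
#    back into the range ordinary CRAH units can handle.
```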

Advantages

  • Handles 60-100+ kW per rack efficiently
  • Removes 70-80% of server heat through liquid, reducing air cooling load dramatically
  • Enables higher GPU clock speeds by maintaining lower junction temperatures
  • Lower PUE (1.1-1.2 typical) compared to air cooling
  • Quieter operation — reduced fan speeds

Limitations

  • Requires compatible server hardware (NVIDIA DGX, HPE Cray, etc.)
  • Facility must have liquid cooling distribution infrastructure
  • Higher initial capital cost
  • Maintenance requires trained technicians
  • Leak risk (mitigated by modern designs using low-pressure loops and leak detection)

Best For

Large-scale AI training clusters, H100/H200 GPU deployments, high-performance computing, and any workload exceeding 40 kW per rack.

Immersion Cooling

How It Works

Servers are fully submerged in a tank of dielectric (non-conductive) fluid. Heat transfers directly from all components to the fluid. In single-phase immersion, the fluid remains liquid and is circulated to heat exchangers. In two-phase immersion, the fluid boils at a low temperature, and the vapor is condensed and returned to the tank.
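For intuition on why two-phase immersion can absorb so much heat, here is a latent-heat sketch. The fluid property used is a rough order-of-magnitude assumption for engineered dielectric fluids, not a figure for any specific product.

```python
# Rough boil-off rate in a two-phase immersion tank.
# In two-phase cooling the heat goes into vaporizing the fluid:
# Q = m_dot * h_fg  ->  m_dot = Q / h_fg

H_FG = 100_000.0   # J/kg, assumed latent heat of vaporization
                   # (order of magnitude for engineered dielectric fluids)

def boiloff_kg_per_s(heat_w: float) -> float:
    """Vapor mass flow that must be condensed and returned to the tank."""
    return heat_w / H_FG

print(f"{boiloff_kg_per_s(100_000):.1f} kg/s of vapor for a 100 kW tank")
# ~1 kg/s: a condenser coil re-liquefies this continuously, so the bulk
# fluid often needs little or no pumping.
```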

Advantages

  • Handles virtually unlimited heat density — 100+ kW per tank easily
  • Eliminates all fans from servers, reducing component failure rates
  • PUE approaching 1.03-1.05, near the theoretical minimum of 1.0
  • Protects components from dust, moisture, and corrosion
  • Extends hardware lifespan by maintaining consistent, optimal temperatures
  • Enables waste heat recovery for building heating or industrial processes

Limitations

  • Requires specially designed tanks and fluid management systems
  • Dielectric fluid is expensive ($10-30+ per liter; see the cost sketch after this list)
  • Hardware maintenance requires draining or extracting servers from fluid
  • Not all server manufacturers support immersion deployment
  • Few colocation providers offer immersion cooling today
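To put the fluid price in perspective, here is a quick fill-cost sketch; the tank volume is a hypothetical assumption, while the per-liter prices come from the range above.

```python
# Ballpark fluid fill cost for one immersion tank (tank volume is a
# hypothetical assumption; per-liter prices from the range above).

def fill_cost(tank_liters: float, price_per_liter: float) -> float:
    return tank_liters * price_per_liter

TANK_LITERS = 800   # assumed usable fluid volume for a rack-scale tank

for price in (10, 30):
    print(f"${fill_cost(TANK_LITERS, price):,.0f} at ${price}/liter")
# -> $8,000 to $24,000 of fluid per tank before any hardware goes in
```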

Best For

Maximum density deployments, sustainability-focused organizations, hot climate locations like Phoenix or Texas, and facilities prioritizing energy efficiency above all else.

Cooling Type Comparison

Technology      | Max kW/Rack | Typical PUE | Retrofit? | Best For
Air cooling     | 15-25 kW    | 1.3-1.6     | N/A       | Traditional IT
Rear-door HX    | 30-40 kW    | 1.2-1.4     | Yes       | Mid-density GPU
Direct-to-chip  | 60-100+ kW  | 1.1-1.2     | Partial   | AI training clusters
Immersion       | 100+ kW     | 1.03-1.05   | No        | Max density/efficiency
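The PUE column translates directly into utility spend. The sketch below compares annual facility energy for a fixed IT load, using PUE values near the midpoints of the ranges above; the 1 MW load and electricity price are assumed inputs.

```python
# Annual facility energy and cost implied by PUE for a fixed IT load.
# facility_kW = IT_kW * PUE; the 1 MW load and $0.08/kWh are assumptions.

HOURS_PER_YEAR = 8760
IT_KW = 1000          # assumed 1 MW of IT load
PRICE_PER_KWH = 0.08  # assumed electricity price

for name, pue in [("Air cooling", 1.45), ("Rear-door HX", 1.30),
                  ("Direct-to-chip", 1.15), ("Immersion", 1.04)]:
    annual_kwh = IT_KW * pue * HOURS_PER_YEAR
    print(f"{name:15s} PUE {pue:.2f} -> {annual_kwh:,.0f} kWh "
          f"(~${annual_kwh * PRICE_PER_KWH:,.0f}/yr)")
# The gap between PUE 1.45 and 1.04 is ~3.6 GWh/yr, roughly $287k/yr
# at these assumed rates.
```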

Choosing the Right Cooling for Your AI Deployment

The right cooling technology depends on your specific deployment. Consider these guidelines, restated as a short selector sketch after the list:

  • Under 20 kW/rack: Standard air cooling is sufficient and most widely available
  • 20-40 kW/rack: Rear-door heat exchangers offer the easiest upgrade path
  • 40-80 kW/rack: Direct-to-chip liquid cooling is the standard for modern GPU clusters
  • 80+ kW/rack: Immersion cooling or advanced liquid cooling is required
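
Those thresholds are easy to encode. Here is a minimal selector sketch that simply restates the guidelines above; the breakpoints are this article's rules of thumb, not an industry standard.

```python
# Map a target rack density to the cooling guidance above.
# The breakpoints restate the guidelines; they are not a formal standard.

def recommend_cooling(kw_per_rack: float) -> str:
    if kw_per_rack < 20:
        return "Standard air cooling"
    if kw_per_rack <= 40:
        return "Rear-door heat exchangers"
    if kw_per_rack <= 80:
        return "Direct-to-chip liquid cooling"
    return "Immersion or advanced liquid cooling"

for density in (12, 35, 70, 120):
    print(f"{density:>3} kW/rack -> {recommend_cooling(density)}")
```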

When evaluating facilities in our directory, filter by cooling type to find data centers that match your density requirements. Most AI-ready facilities offer at least one form of liquid cooling.

Find Data Centers with Advanced Cooling

Search our directory for facilities with liquid cooling, immersion cooling, and high-density support.

Get Free Quotes →