Navigating Liquid Cooling Architectures for Data Centers with AI Workloads

Many AI servers with accelerators (e.g., GPUs) used for training LLMs (large language models) and for inference workloads generate enough heat to necessitate liquid cooling. These servers are equipped with input and output piping and require an ecosystem of manifolds, CDUs (coolant distribution units), and outdoor heat rejection. This paper describes six common heat rejection architectures for liquid cooling and provides guidance on selecting the best one for your AI servers or cluster.

Date: 03 Jun 2025 | Type: White paper
Languages: English | Version: V2
Document Reference: SPD_WP133_EN

Files

File Name
WP133_V2.1_EN.pdf