
The latest publication from ASHRAE TC 9.9, Emergence and Expansion of Liquid Cooling in Mainstream Data Centers, is now available for download here.
Brief summary of this white paper
The IT industry is at a performance inflection point. Buyers of a new computing device expect it to be more powerful than the previous generation. For servers, the industry enjoyed a period over the last decade in which significant performance increases were delivered, generation over generation, accompanied by modest and predictable power increases. That period ended around 2018.
Large power increases in the compute, memory, and storage subsystems of current and future IT equipment are already challenging data centers, especially those with short refresh cycles. The challenges will only increase. Liquid cooling is becoming a requirement in some cases, and should be strongly and quickly considered.
This white paper explains why liquid cooling should be considered, rather than the details around what liquid cooling is or how to deploy it.
Introduction from the white paper
Large increases in IT equipment power will require additional energy and cooling resources and will result in fewer servers per rack. During the 1990s and early 2000s, IT equipment power draw increased regularly. At the time, nameplate power was the typical planning metric, so a refresh may not have been that problematic. This paper addresses three time frames: the early period when power increases were acceptable, the following period when power remained relatively constant, and the current period when power draw is again on the rise. Now that more accurate power measurements are the data center planning metric, there is no longer the comfortable margin of power and cooling overprovisioning that resulted from use of the nameplate metric.
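The planning-margin point above can be illustrated with simple arithmetic. All figures in this sketch are hypothetical, chosen only to show the mechanism: when racks are sized against nameplate ratings, which overstate real draw, hidden power and cooling headroom is built in; sizing against measured power removes that slack.

```python
# Hypothetical illustration of the overprovisioning margin that came from
# planning against nameplate power. All numeric values are invented examples.

RACK_BUDGET_W = 10_000   # assumed rack power budget
NAMEPLATE_W = 800        # assumed per-server nameplate rating
MEASURED_W = 550         # assumed typical measured draw per server

# Sizing against nameplate: fewer servers fit on paper...
servers_by_nameplate = RACK_BUDGET_W // NAMEPLATE_W

# ...but their real draw leaves unused headroom in power and cooling.
actual_draw = servers_by_nameplate * MEASURED_W
hidden_margin = RACK_BUDGET_W - actual_draw

print(f"Servers planned per rack: {servers_by_nameplate}")
print(f"Actual rack draw: {actual_draw} W (hidden margin: {hidden_margin} W)")
```

Planning against measured power reclaims that margin for density, which is why, once measured power became the planning metric, there was no longer a built-in cushion to absorb a power jump at refresh time.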
Processor chips that can only be liquid cooled are currently available, and more are coming in the near future. Many wish to put off the introduction of liquid cooling into the data center due to its cost and complexity. However, there are data center impacts associated with the continued push for higher and higher levels of air cooling. These impacts are likely mitigated by liquid cooling and should be a point of consideration whether or not the data center is considering using liquid-only chips.

One of the unintended consequences of increasing chip power is the need to reduce the case temperature. The case temperature, sometimes called lid temperature, is the temperature on the top surface of the chip, typically at its center; this is often referred to as Tcase. Chip vendors characterize the relationship of the case temperature, an externally measurable location, to the critical internal chip temperatures. Tcase is used during the IT equipment thermal design process to ensure the chip is adequately cooled. With case temperatures decreasing in the future, it will become increasingly difficult to use the higher ASHRAE classes of both air and water. It is recommended that the design of any future data center include the capability to add liquid cooling.
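The link between rising chip power and falling case temperatures can be sketched with the standard case-to-junction thermal relation, Tj = Tcase + P × θJC: for a fixed junction temperature limit, the maximum allowable Tcase drops as power P rises. The numeric values below (junction limit, thermal resistance, power levels) are hypothetical, for illustration only, not figures from the white paper.

```python
# Sketch of why rising chip power forces case temperatures down.
# Vendors relate the measurable case temperature (Tcase) to the critical
# internal junction temperature (Tj) via a case-to-junction thermal
# resistance theta_jc:  Tj = Tcase + P * theta_jc.
# Holding the junction limit fixed, allowable Tcase falls as power rises.
# All numeric values below are hypothetical.

def max_tcase(tj_max_c: float, power_w: float, theta_jc_c_per_w: float) -> float:
    """Maximum allowable case temperature (deg C) for a given junction limit."""
    return tj_max_c - power_w * theta_jc_c_per_w

TJ_MAX = 95.0     # assumed junction temperature limit, deg C
THETA_JC = 0.10   # assumed case-to-junction thermal resistance, deg C/W

for power in (200, 350, 500):  # hypothetical chip power levels, W
    limit = max_tcase(TJ_MAX, power, THETA_JC)
    print(f"{power} W chip -> Tcase must stay below {limit:.1f} C")
```

Under these assumed numbers, a jump from 200 W to 500 W pulls the allowable case temperature down by 30 °C, which is why higher-temperature air (and even warm-water) cooling classes become harder to use as chip power climbs.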
Download the white paper here.