Liquid Cooling turned on its side

The development of Iceotope’s KU:L Sistem, the world’s first 1U immersion cooled server

By Dr. Andy Young, Head of Fluids and Thermal Engineering & Jon Halestrap, Business Development Director, Iceotope

In a modern datacenter, for every kilowatt of computing power deployed, potentially several hundred additional watts are needed to support it. The majority of this energy is used to move coolant around the datacenter: for the most part, that means air. Air is a difficult coolant to deliver to where it is needed; moving it is energy intensive, and much of that energy is lost along the way. One of the major advantages of liquid cooling is that liquids are far more manageable: they fill containers and move through them in a much more controllable and predictable fashion, so far less energy is needed to move them than would be required for air.
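
To put that difference in scale, a back-of-the-envelope comparison shows how much less volume a liquid has to move to carry away the same heat at the same temperature rise. The sketch below uses textbook property values and water as a stand-in for the dielectric coolant; the figures are illustrative, not Iceotope data.

```python
# Rough comparison of the volumetric flow needed to remove 1 kW of heat
# with air versus a liquid, assuming a 10 K coolant temperature rise.
# Property values are textbook approximations, not Iceotope data.

def required_flow_m3_per_s(power_w, density_kg_m3, cp_j_per_kg_k, delta_t_k):
    """Volumetric flow from Q = P / (rho * cp * dT)."""
    return power_w / (density_kg_m3 * cp_j_per_kg_k * delta_t_k)

POWER_W = 1000.0   # 1 kW of IT heat
DELTA_T = 10.0     # allowable coolant temperature rise, kelvin

air_flow   = required_flow_m3_per_s(POWER_W, 1.2, 1005.0, DELTA_T)    # air at ~20 C
water_flow = required_flow_m3_per_s(POWER_W, 998.0, 4180.0, DELTA_T)  # water at ~20 C

print(f"Air flow needed:   {air_flow * 1000:8.2f} L/s")
print(f"Water flow needed: {water_flow * 1000:8.4f} L/s")
print(f"Ratio (air/water): {air_flow / water_flow:8.0f}x")
```

With these assumptions the air-side volumetric flow comes out several thousand times larger than the liquid-side flow, which is where most of the fan energy in an air-cooled facility goes.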

In order to address the challenges of the datacenter market, we looked at how we could adapt our current-generation technology for it. This led us to investigate design changes such as a 1U form factor, horizontal deployment, and backward compatibility with existing rack designs, where immersive liquid cooling technology can deliver the most benefit.

This posed significant design challenges for our engineering team, mainly around moving from our vertical, blade-based system to a horizontal, retrofittable, rack-mounted design. All of these challenges were met and overcome using CFD techniques.

We needed tools that would let us approach problem solving and design in a way that maximized the use of our computational resources and shortened execution timeframes. Employing the concept of the “digital twin” enabled Iceotope to examine multiple variables across a wider range of design scenarios and achieve optimal system and component design. This strategy was supported by Mentor’s 1D CFD software, FloMASTER, which helped us overcome major design obstacles during the development of our new product: KU:L Sistem, the world’s first 1U immersion cooled server.

As previously mentioned, moving air around with a fan wastes a significant quantity of energy. The same applies to pumps used to move liquid if they are designed inefficiently. A liquid cooled server has to deliver coolant to all of the IT components and major heat sources: the heatsinks and other devices on the board. You have to bring the right amount of coolant to the right components without over-engineering, since over-engineering adds cost both in the hardware itself and in the energy required to operate it.

For this part of the design we used FloMASTER to determine the pressure drop in the system across a range of potential flow rates, also known as the system curve. We used the design of experiments functionality to create the design space, and the simulations were run automatically. This allowed us to create Figure 2, which plots the system curve against the curves of several pumps we were considering for the blade, and helped us choose a pump based on its delivered flow rate and power rating.

Figure 2: Blade system curve plotted against candidate pump curves
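
The calculation behind a plot like Figure 2 can be sketched in a few lines. The following is a minimal illustration of the idea rather than the FloMASTER model itself: the blade’s resistance is approximated as a quadratic system curve, each candidate pump gets a simple linear characteristic, and the operating point is where the two cross. All coefficients and pump names are hypothetical.

```python
# Sketch of a system-curve vs pump-curve comparison, analogous to Figure 2.
# The coefficients below are illustrative placeholders, not measured data.

def system_curve(q_lpm, k=0.5):
    """Blade pressure drop in kPa, modelled as dP = k * Q^2."""
    return k * q_lpm ** 2

def pump_curve(q_lpm, dp_max_kpa, q_max_lpm):
    """Simple linear pump characteristic: full head at zero flow, zero head at Q_max."""
    return max(0.0, dp_max_kpa * (1.0 - q_lpm / q_max_lpm))

def operating_point(dp_max_kpa, q_max_lpm, k=0.5):
    """Find the flow where the pump curve meets the system curve (bisection)."""
    lo, hi = 0.0, q_max_lpm
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if system_curve(mid, k) < pump_curve(mid, dp_max_kpa, q_max_lpm):
            lo = mid  # pump delivers more pressure than the system needs: flow can rise
        else:
            hi = mid
    return mid, system_curve(mid, k)

# Hypothetical candidate pumps: (name, max pressure kPa, max flow L/min, rated power W)
candidates = [("Pump A", 30.0, 8.0, 6.0), ("Pump B", 45.0, 6.0, 9.0), ("Pump C", 25.0, 12.0, 8.0)]

for name, dp_max, q_max, watts in candidates:
    q_op, dp_op = operating_point(dp_max, q_max)
    print(f"{name}: operating point ~{q_op:.1f} L/min at {dp_op:.1f} kPa, rated {watts:.0f} W")
```

Comparing the operating points against the flow each blade actually needs, and against each pump’s rated power, is what drives the selection in practice.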

After developing a working prototype to prove our concept, we used FloMASTER (Figure 3) to digitally calculate, tune, and balance flows so that the correct amount of fluid would be delivered to each major heat source without complex over-engineering.

In addition to balancing the flow in the blade, we needed to balance all of the other components that make up a datacenter rack. FloMASTER was used to balance the flow to each component of the rack, both individually and as a whole system. We needed to build a ‘system of systems’, flow balancing at each stage, to ensure that every 1U of technology in the rack receives the right amount of coolant flow, so that the demands on building services are minimized and the temperatures in the system remain acceptable, as shown in Figure 3.

Figure 3: Chassis thermal model
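
The flow-balancing principle can also be illustrated with a simple hand calculation, again as a sketch rather than the actual FloMASTER network: parallel branches fed from a common manifold all see the same pressure drop, so the untrimmed flow split follows each branch’s resistance, and adding a trim restriction to the lower-resistance branches pulls the split back toward the target. The branch resistances and target flows below are made-up values.

```python
# Illustrative flow-balancing sketch for parallel branches fed from a common
# manifold (not the actual FloMASTER network; resistances are made-up values).
# Each branch obeys dP = k * q^2, and all branches see the same manifold dP.

from math import sqrt

def natural_split(total_q, ks):
    """Flow split with no trimming: q_i = sqrt(dP / k_i), same dP everywhere."""
    # Since q_i = sqrt(dP / k_i) and sum(q_i) = total_q, dP follows directly.
    dp = (total_q / sum(1.0 / sqrt(k) for k in ks)) ** 2
    return dp, [sqrt(dp / k) for k in ks]

def trim_resistances(target_q, ks):
    """Extra restriction per branch so every branch hits its target flow.

    The branch that needs the highest dP at its target sets the manifold dP;
    every other branch gets a trim resistance added to match it."""
    dp_needed = [k * q ** 2 for k, q in zip(ks, target_q)]
    dp_ref = max(dp_needed)
    return dp_ref, [dp_ref / q ** 2 - k for k, q in zip(ks, target_q)]

# Hypothetical branches: CPU cold plate, memory bank, power stage (kPa per (L/min)^2)
ks = [0.8, 2.5, 1.5]
total_q = 6.0  # L/min into the blade

dp, flows = natural_split(total_q, ks)
print("Untrimmed split:", [f"{q:.2f} L/min" for q in flows], f"at {dp:.1f} kPa")

targets = [3.0, 1.0, 2.0]  # desired flows per branch, L/min
dp_ref, trims = trim_resistances(targets, ks)
print(f"Manifold dP for balanced flows: {dp_ref:.1f} kPa")
print("Trim resistance to add per branch:", [f"{t:.2f}" for t in trims])
```

The same logic repeats at rack level, with each 1U chassis acting as a branch of the larger network, which is why the ‘system of systems’ approach matters.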

One of the other complexities of air cooling is that you have to condition the air; you have to add moisture to control electrostatic charge generation. While you can recover some of the liquid used in this process, much of it is lost. You might assume that liquid cooling will itself involve significant liquid consumption and a higher rate of leaks, and that is where the analysis software comes in. We designed the risk of leaks out of our system using FloMASTER. By designing a network of pipes that delivers coolant to the server chassis inside the rack without any pressure spikes, or any stresses across joints or connections, leaks can be eradicated from the server room (Figure 4). This is good for the environment, since less water is wasted, and it is also ideal for datacenter management.

Figure 4: Relative pressure spike

At a more prosaic level, we had to acknowledge that any datacenter rack system would have to allow hot swapping of blades. We already use dripless, blind-mate connectors to make this operation fundamentally sound, but it was essential to engineer the complete solution so that any pressure spikes were removed. This was an interesting engineering challenge that required genuinely “out-of-the-box” thinking: without the flow-balancing capability that is a fundamental part of FloMASTER, solving it would have demanded considerable time and resources.
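
To see why those pressure spikes matter, a first-order estimate can be made with the classical Joukowsky water-hammer relation, ΔP = ρ·c·Δv: an abrupt flow stoppage, such as a connector closing during a hot swap, converts flow velocity directly into a pressure rise. The fluid and tubing values below are placeholders, not Iceotope design data.

```python
# Order-of-magnitude estimate of the pressure spike when flow in a branch is
# stopped abruptly (e.g. a blind-mate connector closing during a hot swap).
# Joukowsky relation: dP = rho * c * dv, where c is the pressure wave speed.
# All values are illustrative placeholders, not Iceotope design figures.

def joukowsky_spike_kpa(density_kg_m3, wave_speed_m_s, velocity_change_m_s):
    """Peak pressure rise for an instantaneous flow stoppage, in kPa."""
    return density_kg_m3 * wave_speed_m_s * velocity_change_m_s / 1000.0

RHO = 1000.0         # coolant density, kg/m^3 (water-like placeholder)
WAVE_SPEED = 1200.0  # pressure wave speed in the tubing, m/s (placeholder)

for velocity in (0.1, 0.5, 1.0):  # flow velocity in the branch being stopped, m/s
    spike = joukowsky_spike_kpa(RHO, WAVE_SPEED, velocity)
    print(f"Stopping {velocity:.1f} m/s flow instantly -> spike of ~{spike:.0f} kPa")
```

Even modest flow velocities can generate transients of several bar if stopped instantly, which is why the network had to be balanced and engineered so that such spikes never load the joints and connections.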

Commenting, Dr Andy Young said “FloMASTER delivers great insight into how to develop a design to maximise value and utility. This combined with much reduced model set up time, extremely fast simulation speed and simplified work-flow will make a big difference to our team.”