Data centre and infrastructure design is at a crossroads, driven by business requirements on one hand and by legislative and societal demands on the other. When did the industry's purpose change from providing resilient data availability to its client base at a reasonable margin, to carrying the mantle of environmental, social and governance (ESG) programmes that demonstrate good global citizenship?

This sea change means our industry is now scrutinising old practices and becoming more receptive to new process designs and methodologies. A key area of design examination is cooling, and rightly so: the vast majority of data centres today are air-cooled, which for the most part requires compressor-based chiller systems to force cooled air across the hot IT equipment (ITE). These systems are expensive to procure, have large physical and environmental footprints, consume around 35 percent of the data centre's total power budget and are inefficient at cooling ITE. A combination of these factors is why the average data centre has a PUE of 1.67. On top of that, air-cooled systems offer limited opportunity for heat recycling.
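To show roughly how those two figures relate, here is a back-of-envelope sketch. The 60/35/5 split between IT load, cooling and other overheads is an illustrative assumption, not a measured breakdown, but it is consistent with the numbers above.

```python
# Back-of-envelope PUE arithmetic (illustrative figures only).
# Assumption: IT equipment draws ~60% of total facility power,
# cooling ~35%, and other overheads (lighting, losses) ~5%.
it_share = 0.60
cooling_share = 0.35
other_share = 0.05

total = it_share + cooling_share + other_share   # normalised to 1.0
pue = total / it_share                           # PUE = total facility energy / IT energy
print(f"PUE ≈ {pue:.2f}")                        # ≈ 1.67, the industry average cited above
```

Read the other way round, every point of PUE above 1.0 is energy bought, converted to heat and, in an air-cooled facility, largely thrown away.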

I highlight this situation in order to discuss the quietly spoken-about capability of heat reuse, and how local and district heating infrastructure projects are now achievable. The industry must design for heat reuse from the ground up, although much of the current stock could also be converted to bring it into the circular economy. Harnessing the 35 percent of energy used in cooling and investing in server-level direct cooling will increase the ITE share of the energy budget, providing more compute capacity to customers. Direct cooling of the servers and hot ITE introduces the infrastructure and processes that support heat reuse, whether internally or out to the community. We must resolve this inefficiency at the front end for the industry to become a major energy supplier to the wider community.

Over the past year, we at Iceotope have been surprised by the level of interest in liquid cooling from across the industry, whether from legacy sites looking to introduce a point solution for HPC systems, or from modern colocation sites targeting customers with AI-led requirements that need GPU-rich compute environments. Modifications to the data centre and its infrastructure may at first be challenging. However, the benefits can include a fast ROI and a major reduction in infrastructure energy use, which improves PUE. Liquid cooling is gaining converts who realise that the technology also allows for a reduction in the overall data centre footprint, in line with predictions that future sites will be smaller.

So, how do these front-end benefits assist in developing district heating? The infrastructure required for liquid cooling ensures that heat generated at the server is extracted at a much higher temperature, because dielectric coolants are around 1,000 times more effective than air at capturing heat. When heat is exchanged from the coolant loop to the water loop, the water leaves at around 50°C, and it can then be used in various ways: to heat the data centre's own offices, or to supply external users through an insulated pipe network.
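To put a rough number on the potential, the sketch below estimates the flow of 50°C water a single liquid-cooled rack could supply to a heat network. The 50 kW rack load and the 10 K water temperature rise are hypothetical figures chosen for illustration, not measurements.

```python
# Illustrative heat-recovery estimate for one liquid-cooled rack.
# Assumptions (hypothetical): 50 kW of IT load, nearly all captured by the
# coolant loop, with water entering the heat exchanger at ~40°C and
# leaving at ~50°C (a 10 K rise).
rack_power_w = 50_000          # IT load rejected as heat, in watts (assumed)
delta_t_k = 10                 # water temperature rise across the exchanger (assumed)
specific_heat_water = 4186     # J/(kg·K)

flow_kg_per_s = rack_power_w / (specific_heat_water * delta_t_k)
print(f"~{flow_kg_per_s:.1f} kg/s (≈ {flow_kg_per_s:.1f} L/s) of 50°C water")
# ≈ 1.2 L/s per rack, a steady supply that a district heating loop can draw on
```

Multiply that across hundreds of racks running around the clock and the data centre starts to look like a small, very predictable heat plant.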

There are numerous applications for the recovered heat, and companies that specialise in designing and implementing the network infrastructure that delivers low-carbon, recycled heat into the community. Today, data centres should view district heating projects as part of the organisation's CSR commitment, as well as a new revenue stream and an opportunity to achieve net-zero-carbon status. Moreover, liquid cooling is influencing changes in data centre design and creating far more efficient and sustainable sites. These will become even more important to their local communities as they start to heat schools, colleges and community swimming pools.
