Inefficient air-cooling systems in data centres are not going away any time soon. The vast majority of live sites, and many data centres still in the planning stages, utilise similar systems with varying degrees of inefficiency. Most organisations planning small, remote edge locations are still clinging to tried-and-tested 'white space' designs of chilled air and fan-assisted compute that perpetuate energy inefficiency.
Cooling constraints continue to inhibit compute capacity, even as direct greenhouse gas (GHG) emissions regulation comes into force, and despite more energy-efficient alternatives being available for data centres and edge applications.
A year of disruption and the global shutdown caused by Covid-19 has given way to a year that will be known for Data Gravity, a phrase that describes how the location and generation of data attract more and more applications and services to it. IDG indicates that data volumes are growing at an average of 63 percent per month, and that by 2025 over 463 exabytes of data will be created each day, illustrating the continued requirement for new data centres.
Disruption can be good, and increasing numbers of legacy data centres are realising that while their air cooling satisfies their 2-3kW server racks, it cannot meet the requirements of High Performance Computing (HPC). However, reconfiguring technology halls to accommodate liquid-cooled HPC pods allows air and liquid cooling to co-exist more efficiently and opens new business opportunities. It can also positively impact the site's PUE or ITUE, depending on how accurately you like to measure your ITE energy effectiveness.
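To make the PUE effect concrete, the sketch below works through the standard PUE ratio (total facility power divided by ITE power) with assumed, purely illustrative load figures; the specific kilowatt values are not drawn from any real site.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / ITE power."""
    return total_facility_kw / it_load_kw

# Assumed air-cooled hall: 1,000 kW of ITE load plus 600 kW of
# cooling and ancillary overhead.
air_cooled = pue(1000 + 600, 1000)

# Same ITE load after moving HPC pods to liquid cooling: overhead
# assumed to fall to 250 kW as chillers and air handlers do less work.
hybrid = pue(1000 + 250, 1000)

print(f"Air-cooled PUE: {air_cooled:.2f}")  # 1.60
print(f"Hybrid PUE:     {hybrid:.2f}")      # 1.25
```

A lower ratio means less of every kilowatt drawn by the site is spent on overhead rather than compute, which is why reconfiguring even part of a hall for liquid cooling can move the headline figure.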
As compute density continues to grow, so the heat generated by the ITE increases, and as the industry moves forward the removal of heat will become more regulated. Therefore, we need solutions that allow heat reuse, local energy partnerships and the infrastructure to share or resell that commodity to the community. Liquid cooling removes heat at a much higher temperature than air cooling, around 50°C, and technologies are becoming available to convert that heat into stored energy for reuse, creating a revenue stream for data centre owners and reducing overall OPEX.
Couple that data growth with Gartner's predicted 50:50 model, in which fifty percent of data will be generated and utilised at the user location, and a more agile and adaptable mindset becomes necessary in the design of data centres, whether large or small, remote or in town centres. Data centres can no longer be clustered around high-availability power nodes such as West London.
Across the country, where power distribution is limited, the data centre industry must be more effective in targeting where power is utilised within edge facilities. This means compute capability must take priority, and ancillary activity, such as cooling, must support energy efficiency, not detract from it.
Many cooling techniques rely on compressor-based air cooling. The technology is well understood within the industry, and it is relatively reliable and resilient; however, it is, by anyone's reckoning, inefficient. Continuing to deploy air-cooled edge data centres is simply not sustainable.
In light of the latest Scope emissions protocols, our edge data centres must be installed as highly efficient users of energy. Eliminating air-cooling chillers and fans can liberate around thirty percent of the energy used by the site, and without the need to pull chilled air through the server, the rack fans can also be removed, saving around ten percent of the server's power use.
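A minimal sketch of that arithmetic, using assumed example loads (the 500 kW site draw and 300 kW server draw are hypothetical; only the thirty percent and ten percent figures come from the text):

```python
site_energy_kw = 500.0    # assumed total draw of an edge site
server_power_kw = 300.0   # assumed server draw within that site

# Removing chillers and fans frees ~30% of site energy.
cooling_saving_kw = 0.30 * site_energy_kw

# With no chilled air to pull through, rack fans can go too,
# saving ~10% of server power.
fan_saving_kw = 0.10 * server_power_kw

print(f"Cooling plant saving: {cooling_saving_kw:.0f} kW")  # 150 kW
print(f"Server fan saving:    {fan_saving_kw:.0f} kW")      # 30 kW
```

On these assumed numbers, well over a third of the cooling-related draw disappears, which is the scale of efficiency the Scope protocols are pushing edge operators towards.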
Data gravity, coupled with the growth of data centres of all descriptions, has consequences for the whole industry's supply chain. Liquid cooling is not yet industry critical; however, with legislators targeting sectors with high energy and land use, the market must demonstrate that it understands the requirement to implement technology that optimises energy use and improves sustainability.