Liquid Cooling Evolution

There are many challenges facing data centres and the best solutions may not be the most obvious ones


We know technology is evolving as software, hardware and services converge. This, of course, presents challenges, but also opportunities and new ventures. We face significant commercial and regulatory changes that will drive faster IT refreshes, greater server density and the need to consume fewer resources, natural and otherwise, per server. This change is expected and foreseeable. But the change that people are not expecting or preparing for is that traditional ways of handling compute power won’t work: the data centres of today simply can’t scale up to the performance and energy challenges coming at us fast and furiously. There are four factors driving this change: people, place, power and planet.

The people factor
Considering the exponential growth in Internet usage we’ve seen over the past 15 years, it may be surprising that only around 51% of the world’s population has Internet access. This figure will continue to rise, and at a faster rate of adoption than we’ve seen previously.
Today, around 54% of the world’s population live in cities, and every year a population ten times that of London moves to cities around the world. Accordingly, this figure is expected to rise to 66% by 2050. Growing urbanisation will also fuel growth in the middle class, and middle classes need goods and services to consume. The middle class in Asia alone is expected to be some 3 billion strong by 2030.

As well as driving further growth in Internet usage, this growth in urbanisation and the middle class will create all kinds of opportunities in emerging technologies such as AI, VR/AR and autonomous vehicles. The opportunities these consumers represent for dominance, growth and profits are almost incalculable. We’re not limited in ambition, creativity or means – the limiting factors are things like water and power.

Data centre power
Although CPUs have continued to get incrementally faster, their thermal output has remained roughly the same, so cooling systems have only ever had to deal with relatively low-power chips.

Most data centre operators and server manufacturers are set up for this and know how to deal with this much heat. But CPUs are about to get hotter, a lot hotter.

Heat loads on the next generation of CPUs and GPUs are heading beyond 200W per chip, towards 400W and even 500W – the natural corollary of their increased processing power. Persisting with air cooling will force density down, completely destroying performance and power utilisation efficiencies – it just doesn’t work.
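To make the scale of that problem concrete, here is a rough, back-of-the-envelope sketch of the airflow an air-cooled rack would need as chip heat loads climb. The chips-per-rack count and the allowable air temperature rise are illustrative assumptions, not figures from this article; the only physics involved is the standard energy balance (heat removed = air density × specific heat × flow rate × temperature rise).

```python
# Back-of-the-envelope airflow needed to air-cool one rack as chip TDPs rise.
# CHIPS_PER_RACK and DELTA_T are illustrative assumptions, not article figures.

AIR_DENSITY = 1.2      # kg/m^3, air at roughly room temperature and sea level
AIR_CP = 1005          # J/(kg*K), specific heat capacity of air
DELTA_T = 12           # K, allowable inlet-to-outlet air temperature rise
CHIPS_PER_RACK = 32    # e.g. 16 dual-socket servers (assumption)

for tdp_watts in (200, 400, 500):
    heat_load = tdp_watts * CHIPS_PER_RACK                  # W of chip heat
    # Energy balance: P = rho * cp * V_dot * dT  =>  V_dot = P / (rho * cp * dT)
    airflow_m3s = heat_load / (AIR_DENSITY * AIR_CP * DELTA_T)
    airflow_cfm = airflow_m3s * 2118.88                     # 1 m^3/s = 2118.88 CFM
    print(f"{tdp_watts} W chips: {heat_load/1000:.1f} kW of chip heat, "
          f"{airflow_m3s:.2f} m^3/s ({airflow_cfm:,.0f} CFM) of air")
```

At 200W chips the rack needs under 1,000 CFM of air; at 500W it needs roughly two and a half times that, before memory, storage and power supplies are even counted – which is why density has to fall if the air approach is kept.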

Regulation is in the works that will make it increasingly difficult to run an inefficient or wasteful data centre – this will make refreshes more frequent and will, despite the hotter CPUs, demand greater density. This change will be imposed on the industry. More positively, speed and proximity will become the key to winning – we’ll have to change, but there is great competitive advantage to be had in embracing that change.

High-quality, readily available bandwidth and cheap sensors have combined to create the IoT – billions of connected devices processing billions of pieces of information every second. Although the phrase is somewhat hackneyed, edge computing and its impact continue to grow: you should do all you can in the centralised cloud, but not everything can be done there, for various reasons – criticality, latency and data sovereignty/security chief among them.

Power and location
Over the next 20 years or so, much of the world’s economic growth is predicted to come from Africa, with less developed parts of Asia and South America also likely to go from strength to strength. For these places to truly participate in the global economy and take advantage of the opportunities presented by technological progress, they’re going to need data centres, and data centres consume lots of power.

Generally speaking, these locations are hot and suffer from power and water poverty, which makes building data centres with traditional cooling techniques challenging. Every kilowatt of power used on cooling is a kilowatt not being used to provide more compute power.

It’s clear that we’re going to require ever bigger and more numerous data centres for some time to come. From an environmental standpoint, data centres already consume a staggering share of the world’s electricity, with experts predicting that by 2025, ICT will account for 20% of the world’s electricity usage and contribute more than 3.5% of global carbon emissions – more than aviation and shipping. When you consider that cooling currently accounts for around 40% of data centre energy usage, it becomes apparent that for technology to keep pace with our demands without damaging the environment, we’re going to have to seek alternative solutions.
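A quick sketch shows what that 40% figure implies in terms of Power Usage Effectiveness (PUE), the ratio of total facility power to the power that actually reaches the IT equipment. This is a deliberate simplification – it treats everything that isn’t cooling as IT load and ignores other overheads such as power distribution losses and lighting – so the numbers are illustrative rather than measured.

```python
# What "cooling is ~40% of data centre energy" implies for PUE, under the
# simplifying assumption that everything that isn't cooling is IT load.

TOTAL_FACILITY_KW = 1000     # hypothetical 1 MW facility (assumption)
COOLING_FRACTION = 0.40      # cooling's share of energy, per the figure above

cooling_kw = TOTAL_FACILITY_KW * COOLING_FRACTION
it_kw = TOTAL_FACILITY_KW - cooling_kw
pue = TOTAL_FACILITY_KW / it_kw   # PUE = total facility power / IT power

print(f"Cooling draw: {cooling_kw:.0f} kW")
print(f"IT compute:   {it_kw:.0f} kW")
print(f"Implied PUE:  {pue:.2f}")  # ~1.67
```

An implied PUE of around 1.67 means every kilowatt of compute effectively costs 1.67 kilowatts at the meter, which makes the cooling share the single biggest lever on the ratio.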

Electricity use and carbon emissions aren’t the only resource issues facing data centres: they’re also consuming massive amounts of water, with a typical data centre getting through the equivalent of an Olympic-sized pool every day. And water usage isn’t simply an economic or environmental issue, it’s also a political one – especially in places with water supply challenges, such as California, home to the world’s biggest tech companies and their hyperscale data centres.
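For a sense of scale, a one-line calculation of what an Olympic-sized pool a day adds up to. The pool volume assumes the standard 50m × 25m footprint at 2m depth; real pool depths vary, so treat the result as an order-of-magnitude figure.

```python
# Scale of "an Olympic-sized pool per day": assumes a 50m x 25m x 2m pool.

POOL_LITRES = 50 * 25 * 2 * 1000        # 2,500,000 litres per pool
per_year_litres = POOL_LITRES * 365

print(f"{POOL_LITRES:,} litres/day is about "
      f"{per_year_litres / 1e6:,.1f} million litres/year")
```

That is over 900 million litres a year for a single facility.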

Time for liquid
So, what’s the solution? Whether apocryphal or not, industrialist Henry Ford is credited with having once said, “If I had asked people what they wanted, they would have said faster horses.” Today, those with a vested interest in the status quo would have you believe that what you need is colder air. 

What’s really needed are new cooling methodologies and formats. So, how about liquid cooling? After all, liquids have thousands of times the heat transfer capacity of air, and some industries have been taking advantage of this for years.
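That ‘thousands of times’ claim is easy to sanity-check by comparing volumetric heat capacity – how much heat a cubic metre of coolant carries away per degree of temperature rise. This is one reasonable reading of ‘heat transfer capacity’ (convective heat transfer coefficients tell a similar story), and the property values below are standard textbook figures at around 20°C, not numbers from this article.

```python
# Compare how much heat a cubic metre of water vs air absorbs per kelvin.
# Property values are standard figures at ~20 C (assumption, not from article).

water_rho, water_cp = 998.0, 4186.0   # kg/m^3, J/(kg*K)
air_rho, air_cp = 1.2, 1005.0         # kg/m^3, J/(kg*K)

water_vhc = water_rho * water_cp      # volumetric heat capacity, J/(m^3*K)
air_vhc = air_rho * air_cp

print(f"Water: {water_vhc / 1e6:.2f} MJ/(m^3*K)")
print(f"Air:   {air_vhc / 1e3:.2f} kJ/(m^3*K)")
print(f"Ratio: {water_vhc / air_vhc:,.0f}x")  # roughly 3,500x
```

Water comes out around 3,500 times ahead by this measure, which is why a thin trickle of liquid can do the work of a gale of chilled air.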

Before the data centre industry takes the plunge and embraces liquid cooling, it needs to be able to achieve certain things: reduce capital expenditure through reduced complexity, be available in familiar, data centre ready form factors, be safe and free from risks of leaks or fires, have a fast ROI and low TCO, be simple to integrate with existing infrastructure, be easy to deploy and use as little water as possible.