Among the top concerns for datacentre operators is ensuring their facilities operate as efficiently as possible from a cost, energy consumption and IT utilisation point of view – and with good reason.
Tom Christensen, CTO at Hitachi Vantara, tells Computer Weekly that his company found across 1,200 enterprise datacentre assessments that better systems utilisation could slash the total cost of ownership (TCO) by an average of one-third.
Some savings can be made in six to 12 months, says Christensen. So where should datacentre operators start? Computer Weekly quizzed a series of datacentre market experts to get their top tips on where to begin.
1. Understand who you are and what you have
Marc Garner, UK and Ireland vice-president of datacentre energy management systems provider Schneider Electric’s Secure Power division, suggests examining “the human factor”. Look at your in-house skills, the people processes, and consider the assets you have spread across the datacentre. If you need an automation architect or other specialist, can you get one?
“Do you have a clear view, a clear vision and strategy of what you want to achieve – better energy efficiency, or sustainability as an outcome?” says Garner. “Can your organisation and people get behind it? Because behaviour is the key fundamental, yet more often than not, we jump straight into trying to fix the system itself.”
Nick Ewing, managing director at EfficiencyIT, points out that efficiency problems are often down to human error. “If I had my choice, I wouldn’t let human beings into it,” he says. “However, the overarching theme is get visibility.”
Hitachi Vantara’s Christensen says detailed datacentre snapshots often reveal surprises. “We found average storage utilisation for VMware environments of 47%, average memory utilisation only 34%, CPU 29%, average storage capacity utilisation 62% and switchport capacity 78% – people are using the switching they buy,” he says. “Storage allocated but not used: 0.6%. File share data not accessed for six months: 72%.”
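An assessment like the one Christensen describes boils down to averaging per-host utilisation figures and flagging anything that looks underused. A minimal sketch of that aggregation step – the field names, sample figures and 50% threshold here are all illustrative assumptions, not a real assessment tool's schema:

```python
# Minimal sketch: summarise utilisation metrics from an assessment export.
# Field names, sample values and the 50% threshold are illustrative only.

def summarise(hosts, threshold=0.5):
    """Average each metric across hosts and flag those below the threshold."""
    metrics = {}
    for host in hosts:
        for name, value in host.items():
            metrics.setdefault(name, []).append(value)
    averages = {name: sum(vals) / len(vals) for name, vals in metrics.items()}
    underused = [name for name, avg in averages.items() if avg < threshold]
    return averages, underused

# Two hypothetical hosts, utilisation expressed as a fraction of capacity
hosts = [
    {"cpu": 0.25, "memory": 0.30, "storage": 0.60},
    {"cpu": 0.33, "memory": 0.38, "storage": 0.64},
]
averages, underused = summarise(hosts)
print(averages)   # per-metric averages across both hosts
print(underused)  # metrics averaging below 50% utilisation
```

In practice the input would come from a VMware or monitoring-tool export rather than hand-typed dictionaries, but the principle – baseline first, then act on the outliers – is the same.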
2. Invest in data monitoring and management tools
Martin Hodgson, Paessler’s UK and Ireland country manager, says maximum system utilisation means measuring and monitoring system health, whether you are talking about hardware, preventive maintenance, security, communications or environmental management. Pinpoint areas of weakness and strength, and redeploy resources to reduce total cost of ownership or boost resilience.
“A thorough audit is the first port of call,” he says. “Many customers invest in new technology before actually accurately baselining where they are today.”
Performance can then be benchmarked via best-practice guidelines from the likes of ASHRAE.
“By referencing thermal guidelines and actually applying them correctly, savings can be made,” says Paul Finch, chief operating officer at Harlow-based colocation provider Kao Data. “You need certainty that whatever is proposed is actually going to pay dividends.”
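Applying those thermal guidelines can be as simple as comparing rack inlet readings against the ASHRAE-recommended envelope for air-cooled IT equipment (18-27°C). A sketch of that check – the rack names and readings are made-up sample data; in practice they would come from your monitoring tool:

```python
# Sketch: flag rack inlet temperatures outside the ASHRAE-recommended
# envelope of 18-27 degrees C for air-cooled IT equipment.
RECOMMENDED_LOW_C = 18.0
RECOMMENDED_HIGH_C = 27.0

def check_inlets(readings):
    """Return (rack, temperature) pairs outside the recommended envelope."""
    return [(rack, t) for rack, t in readings.items()
            if not RECOMMENDED_LOW_C <= t <= RECOMMENDED_HIGH_C]

# Hypothetical inlet readings in degrees C
readings = {"rack-01": 21.5, "rack-02": 16.0, "rack-03": 29.3}
for rack, temp in check_inlets(readings):
    print(f"{rack}: {temp} C is outside the 18-27 C recommended range")
```

A rack running below the envelope (like rack-02 above) often signals overcooling – energy being spent on chilled air the equipment does not need.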
Mike O’Keeffe, vice-president at datacentre design firm Vertiv, notes that monitoring is often already budgeted for internally and can help you understand power consumption. Are the chillers on all winter, with systems remaining well below temperature set points, for example?
Sensors today enable accurate measurement across more parameters. Look at airflow, floor cavities, blanking plates, set points, containment and right-sizing of the power train. A lot of large uninterruptible power supply (UPS) units can be running at 20% load, says O’Keeffe.
“An older datacentre can be very inefficient,” he adds. “We run audits with customers that provide sufficient data to enable some of those decisions.”
3. Optimise cooling – everywhere
Oliver Goodman, head of engineering at Docklands-based colocation provider Telehouse, says tackling airflow management issues is typically an easy win when it comes to making facilities more efficient. But in colocation sites, this is not always easy to alter.
“If we make sure that for every cubic centimetre of cold air that we produce, passing through a server or a piece of equipment within the data hall, that there is no bypass or short-circuiting, then we know that we’re only cooling what we need to cool,” he says. “That improves significantly the efficiency of what we do.
“What I would love is for customers to use the space that they have taken off us to its maximum ability, using 70%, 80% or 90% of that load. Our systems are designed to operate more efficiently that way.”
Kao Data’s Finch notes that even hot aisle and cold aisle containment is often not optimal, and improvements will drive greater reliability as well as energy efficiency. “Ultimately, this is going to really have an impact on the customer’s bottom line as well, so it does tick every box,” he says.
Sam Prudhomme, vice-president of sales and marketing for mission-critical environments at Subzero Engineering, backs this view. “We can get immediate return on investment by blanking off certain areas, preventing hotspots with really simple containment apparatuses,” he says.
“Perhaps row number five, rack number six is 15° hotter than everything else? Do I just need to open up a little bit more in front of that rack? Those are things you would do immediately.”
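The hotspot check Prudhomme describes can be sketched as flagging any rack whose inlet temperature sits well above its neighbours. The rack names, readings and 10-degree margin below are illustrative assumptions:

```python
# Sketch: flag racks running well above the median of their neighbours.
# Data and the 10-degree margin are made up for illustration.
from statistics import median

def find_hotspots(temps, margin=10.0):
    """Return racks whose temperature exceeds the median by more than margin."""
    baseline = median(temps.values())
    return {rack: t for rack, t in temps.items() if t - baseline > margin}

# Hypothetical inlet temperatures in degrees C for one row
temps = {"row5-rack4": 22.0, "row5-rack5": 23.0, "row5-rack6": 38.0}
print(find_hotspots(temps))  # only the outlier rack is flagged
```

Once a rack is flagged, the fix is often exactly what Prudhomme suggests: open up perforated tiles or adjust blanking in front of that rack before reaching for bigger changes.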
Prudhomme also suggests looking at fan speed and modulation.
4. Power efficiency and upgrades
Operators could then consider whether their power and cooling systems are “actually good enough” for requirements. That, of course, takes you back to computational fluid dynamics and monitoring, says Prudhomme.
After cooling, look at the power system – and once again, data from monitoring and management will suggest where to make changes. Likely quick-win improvements may include updating to lithium-ion batteries or swapping out legacy infrastructure – including UPS units that are more than a few years old.
Not every server, rack or other piece of hardware might be needed, either.