
What makes a data centre energy efficient

CIOL Bureau

BANGALORE, INDIA: Roger Schmidt, IBM Fellow and member of the National Academy of Engineering and the IBM Academy of Technology, says that even the best fossil fuel-burning power plant delivers only about 35 units of energy for every 100 units of fuel put into it. When those 35 units travel through transmission lines, roughly five more units are lost. So, ultimately, only about 30 units of energy reach a data centre from the power plant.


Now, if you think the data centre gets to use at least that one-third of the original 100 units, you are wrong. Inside the data centre, those 30 units are split roughly equally between the IT equipment and the supporting power and cooling infrastructure.

In an exclusive interaction with CIOL, Roger told Deepa Damodaran: "In the x86 processor market, utilisation is roughly between 5 and 12 per cent. So we start with 100 units and end up with 5 per cent utilisation down the line, which leaves only about one per cent of the original count. One-third of the power goes into cooling, one-sixth goes to power distribution and one-half goes to the IT equipment. One of the main challenges today is how to reduce the power and cooling overhead of the infrastructure and also improve the energy efficiency of the IT equipment." Excerpts:
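To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The 35-unit plant output, five-unit transmission loss, the 1/2 : 1/3 : 1/6 split and the 5-12 per cent utilisation range come from the interview; the variable names and everything else are illustrative.

```python
# Energy chain from power plant fuel to useful work in the servers.
fuel_in = 100.0                              # units of energy burned at the plant
at_plant_output = 35.0                       # ~35 pc plant efficiency (from the interview)
after_transmission = at_plant_output - 5.0   # ~5 more units lost in transmission lines

to_it_equipment = after_transmission * 1 / 2  # half of the ~30 units reaches IT gear
to_cooling      = after_transmission * 1 / 3  # one-third goes to cooling
to_power_dist   = after_transmission * 1 / 6  # one-sixth goes to power distribution

print(f"split of {after_transmission:.0f} units: IT {to_it_equipment:.0f}, "
      f"cooling {to_cooling:.0f}, power distribution {to_power_dist:.0f}")

for utilisation in (0.05, 0.12):             # typical x86 utilisation of 5-12 pc
    useful = to_it_equipment * utilisation
    print(f"{utilisation:.0%} utilisation -> {useful:.2f} of the original "
          f"{fuel_in:.0f} units doing useful work (~{useful / fuel_in:.1%})")
```

At 5 per cent utilisation this works out to roughly 0.75 units, which is the "only about one per cent" Schmidt refers to.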

CIOL: What are the best practices to gain energy efficiency inside a data centre?


Roger: There are a lot. One is to measure and monitor what you have within the data centre and also within the IT equipment.

Temperature in particular, along with humidity and pressure, needs to be measured inside the data centre, so that we know what is good and what is bad inside it.

Too much heat and too much cold are both bad for a data centre: heat is bad for the equipment, while too much cold means you are wasting a lot of energy.


Data centre managers can identify these hot or cold spots and start making changes, such as moving perforated tiles around, blocking cable openings in the raised floor through which air escapes without doing any good, checking the ventilation system, and even checking the temperature between server racks and underneath the floor.
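As a rough illustration of this measure-and-monitor step, here is a minimal Python sketch that flags hot and cold spots from rack-inlet temperature readings. The rack names and readings are invented, and the 18-27 degrees Celsius thresholds are taken from the envelope quoted later in the interview; this is not any IBM tool.

```python
# Hypothetical rack-inlet temperature readings, in degrees Celsius.
rack_inlet_temps_c = {
    "rack-A1": 16.5,   # over-cooled: wasted cooling energy
    "rack-B3": 24.0,   # within the recommended envelope
    "rack-C7": 29.5,   # hot spot: likely recirculation or a blocked tile
}

LOW_C, HIGH_C = 18.0, 27.0   # recommended inlet envelope quoted in the interview

for rack, temp in rack_inlet_temps_c.items():
    if temp > HIGH_C:
        print(f"{rack}: {temp} C -> hot spot, check tiles, cable openings, recirculation")
    elif temp < LOW_C:
        print(f"{rack}: {temp} C -> over-cooled, raise the setpoint or rebalance airflow")
    else:
        print(f"{rack}: {temp} C -> within the envelope")
```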

Half of the air that comes out of the air conditioning units does not reach the targeted areas, i.e. the servers. Fifty per cent of it is lost through leakage. So plug the holes.

Now, while walking down the aisle where the cool air moves, if you can look through a rack and see the other side, it means hot air is probably coming back through that gap and mixing with the cold air on this side. Plug it. These simple things are usually not done.


Now, if the humidity or moisture content is too high, i.e. above the recommended 60 per cent relative humidity, then you need to look at dehumidification. The recommended range is above 20 per cent and below 60 per cent.
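A minimal sketch of that humidity rule, assuming the 20-60 per cent relative humidity band from the interview; the function name, the sample readings and the one-line rationales in the comments are illustrative additions, not part of the interview.

```python
def humidity_action(relative_humidity_pc: float) -> str:
    """Suggest an action for a relative-humidity reading, per the 20-60 pc band."""
    if relative_humidity_pc > 60:
        return "above 60 pc RH -> dehumidify"
    if relative_humidity_pc < 20:
        return "below 20 pc RH -> humidify"
    return "within the recommended 20-60 pc RH band"

for reading in (15, 45, 72):   # illustrative sample readings
    print(f"{reading} pc RH: {humidity_action(reading)}")
```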

Not just this, but temperature inside IT hardware also needs to be measured and monitored.

We have seen energy savings of about 15 to 20 per cent on IT equipment when such issues are looked into, and that is a big number. The energy saved can, in turn, be used to power more IT equipment.


CIOL: What are the steps being taken to increase the utilisation of energy inside data centres?

Roger: We have all grown up thinking that cold is best for electronics. We need to grow out of this notion.

Typically, a data centre is kept very cold, and not because the equipment's temperature envelope requires it or because it is meant to be that way.


The recommended temperature envelope for IT equipment today is 18 to 27 degrees Celsius. Despite this, data centres have been running at 18 degrees Celsius.

We need to educate clients across the world to turn the thermostat up to 25 or 27 degrees Celsius, which we IT manufacturers also recommend. It does not always have to be cold.

The four major things that IT equipment requires are controlled temperature, humidity, particulates, which means dust and gases, and pressure. If you are able to maintain those four requirements, then we IT manufacturers are happy.


On the power side, we have some inefficiencies because power coming from the sub-station gets converted from AC to DC, and then converted back from DC to AC as it moves into the data centre building.

This is because the power inside a chip is at about one volt, whereas outside the building it is at 14,000 volts. That 14,000 volts has to go through all these conversions.

One piece of the pie is to bring power and cooling together and pool them so that we can move to high-voltage DC power distribution.

The telecom industry uses 48-volt DC distribution, and we need to bring that kind of DC distribution, at higher voltage, into the data centre as well.

That way, the conversions from AC to DC and back to AC on the way from outside the data centre to the inside no longer need to happen, and we can plug DC straight into our servers.
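As a rough illustration of why dropping conversion stages helps, here is a minimal Python sketch comparing a conventional AC-DC-AC-DC chain with a direct high-voltage DC feed. The per-stage efficiency numbers are assumed for illustration only, not measured IBM figures; the point is simply that every extra AC/DC hop multiplies in another loss.

```python
def chain_efficiency(stage_efficiencies):
    """Multiply per-stage efficiencies to get end-to-end delivered power fraction."""
    eff = 1.0
    for stage in stage_efficiencies:
        eff *= stage
    return eff

# Conventional path: AC -> DC (UPS rectifier) -> AC (UPS inverter) -> DC (server PSU).
conventional = chain_efficiency([0.96, 0.96, 0.92])   # assumed efficiencies

# High-voltage DC path: one rectification stage, then DC straight to the server.
hvdc = chain_efficiency([0.96, 0.94])                 # assumed efficiencies

print(f"conventional AC/DC/AC chain: {conventional:.1%} of input power reaches the load")
print(f"high-voltage DC chain:       {hvdc:.1%} of input power reaches the load")
```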

IBM today has server systems, the z mainframe and Power series, into which we can plug high-voltage DC. This not only addresses the power loss issue, but also improves the efficiency of the IT equipment.

